
Dimensionality Reduction



Hi! In this blog I will be discussing some of the trade-offs we make while choosing a dimensionality reduction technique for a problem. Let's jump into it directly.

Dimensionality reduction (DR) reduces higher-dimensional data to lower dimensions. In other words, DR maps $D$-dimensional data into $d$ dimensions ($d < D$), where these new $d$ dimensions hold nearly all of the relevant information about the original data. Sometimes DR results can show clusters that are not even present in the original data, and sometimes DR can map two points that are neighbors in the higher dimension far apart in the lower dimension. So let's discuss and compare some methods that try to prevent these problems. I will be discussing t-SNE, UMAP, and TriMap in this blog.

1. t-SNE (t-distributed Stochastic Neighbor Embedding)

t-SNE converts the distance between two points in the high dimension into a conditional probability and maps it to the lower dimension. For high-dimensional points $x_i$ and $x_j$,

$$p_{j|i} = \frac{\exp\left(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2\right)}{\sum_{k \neq i} \exp\left(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2\right)},$$

where $\sigma_i$ is computed by binary search in the equation $\text{Perplexity} = 2^{H(P_i)}$, with $H(P_i) = -\sum_j p_{j|i} \log_2 p_{j|i}$, and the perplexity is a user-defined parameter.

Then define the symmetric probability,

$$p_{ij} = \frac{p_{j|i} + p_{i|j}}{2n}.$$

The symmetric probability in the lower dimension uses a Student-t kernel with one degree of freedom:

$$q_{ij} = \frac{\left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}}{\sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1}}.$$

Loss function, the KL divergence between $P$ and $Q$:

$$C = \mathrm{KL}(P \,\|\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}.$$

The mapped data points $y_i$ are initialized from the multivariate normal distribution $\mathcal{N}(0, 10^{-4} I)$ and then updated by gradient descent on this loss.
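As a concrete illustration, here is a minimal sketch of running t-SNE with scikit-learn. The random dataset and the perplexity value are illustrative assumptions, not recommendations:

```python
# Minimal t-SNE sketch using scikit-learn; data and parameters are illustrative.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))       # 500 points in D = 50 dimensions

tsne = TSNE(
    n_components=2,                  # target dimensionality d
    perplexity=30.0,                 # user-defined perplexity that fixes each sigma_i
    init="random",                   # start from a small random Gaussian, as above
    random_state=0,
)
Y = tsne.fit_transform(X)            # (500, 2) embedding minimizing KL(P || Q)
print(Y.shape)
```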

2. UMAP (Uniform Manifold Approximation and Projection)

UMAP is a two-step method: first it constructs a weighted k-nearest-neighbor graph representing the high-dimensional data, and second it optimizes a low-dimensional layout of this graph by minimizing a cross-entropy objective between the high- and low-dimensional representations.
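A minimal sketch of those two steps in code, assuming the umap-learn package is installed; the data and parameter values are illustrative:

```python
# Minimal UMAP sketch, assuming the umap-learn package; data are illustrative.
import numpy as np
import umap

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))

reducer = umap.UMAP(
    n_neighbors=15,    # neighborhood size used for the high-dimensional graph
    min_dist=0.1,      # how tightly points may pack in the low-dimensional layout
    n_components=2,
)
Y = reducer.fit_transform(X)   # step 1 (graph) + step 2 (layout optimization)
print(Y.shape)                 # (500, 2)
```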

3. TriMap

TriMap considers three points at a time (triplets) in the high dimension and finds an embedding that preserves the ordering of distances within a sampled subset of triplets: if $x_i$ is closer to $x_j$ than to $x_k$, then $y_i$ should stay closer to $y_j$ than to $y_k$.
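A minimal sketch, assuming the trimap package is installed; the data here are placeholders:

```python
# Minimal TriMap sketch, assuming the trimap package; data are placeholders.
import numpy as np
import trimap

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))

# TriMap samples triplets (i, j, k) where x_i should stay closer to x_j
# than to x_k, then optimizes a 2-D embedding to preserve those orderings.
Y = trimap.TRIMAP().fit_transform(X)
print(Y.shape)   # (500, 2)
```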

_____

We can map the data either in a parametric or a non-parametric way. Parametric methods learn an explicit mapping function, so to embed a new data point we just plug it into the learned function. Non-parametric methods learn an embedding only for the points they were fitted on, so we need to re-run the optimization as new data points arrive.
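A small sketch of this difference, using PCA as the parametric example and scikit-learn's t-SNE as the non-parametric one; the data are placeholders:

```python
# Parametric vs. non-parametric mapping; data are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 50))
X_new = rng.normal(size=(10, 50))

pca = PCA(n_components=2).fit(X_train)   # learns an explicit linear map
Y_new = pca.transform(X_new)             # new points just plug into that map

tsne = TSNE(n_components=2, random_state=0)
Y_train = tsne.fit_transform(X_train)    # embeds only the points it was fitted on
# TSNE has no transform() for unseen points: the optimization must be
# re-run (or a parametric variant trained) to place new data.
```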


References:

  1. arXiv:2012.04456
  2. arXiv:1802.03426
  3. arXiv:1910.00204

