Category Attentions

Hadamard Attentions: The Mighty Attentions Optimized

Attention is the mightiest layer so far, living up to its parent paper's claim that “Attention is all you need” in the truest sense. Almost all tasks, be it images, voice, text, or reasoning,...
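For context on what the post sets out to optimize (the excerpt itself does not spell out the Hadamard variant), here is a minimal sketch of the standard scaled dot-product attention from “Attention is all you need”; the shapes and names below are illustrative assumptions, not code from the post.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Standard attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # (batch, seq, seq) score matrix
    weights = torch.softmax(scores, dim=-1)
    return weights @ v                               # (batch, seq, d)

# Illustrative shapes only: batch of 2, sequence length 16, head dimension 64.
q, k, v = (torch.randn(2, 16, 64) for _ in range(3))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 16, 64])
```

The quadratic (seq × seq) score matrix is the main cost of this layer, and presumably the part a Hadamard (element-wise) reformulation aims to cheapen.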

Category Transformers

Hadamard Attentions: The Mighty Attentions Optimized

Attention is the mightiest layer so far, living up to its parent paper's claim that “Attention is all you need” in the truest sense. Almost all tasks, be it images, voice, text, or reasoning,...

Category Deep Learning

Dropout Inherently Enables Contrastive Self Supervised Pretraining

Self-supervised training is a eureka concept: machines no longer need labels to learn concepts. It started with the scarcity of tagged data, to which self-supervised training was the solution. However, in recent...

Deep Learning Models are Reinforcement Learning Agents

Deep learning has reached great heights, while reinforcement learning is yet to find its moment. But today we will take a problem from the famous deep learning space and map...

Min-Max Loss, Self Supervised Classification

We left the last post, Min Max Loss Classification Example, on the promise of demonstrating a self-supervised classification example. What is self-supervised classification? Self-supervised classification is a task...

Min-Max Loss, Revisiting Classification Losses

In continuation of my Partially Tagged Data Classification post, we formulate a generic loss function applicable to all tasks (classification, metric learning, clustering, ranking, etc.).

Multi Single Class Learning, Classifying Partially Tagged Data

Machine learning requires a large amount of clean data for models to be trained, but that's rarely the case in reality. A real-life scenario is clicks/likes data,...

Hadamard Attentions: The Mighty Attentions Optimized

Attention is the mightiest layer so far, living up to its parent paper's claim that “Attention is all you need” in the truest sense. Almost all tasks, be it images, voice, text, or reasoning,...

Category NLP

Hadamard Attentions: The Mighty Attentions Optimized

Attention is the mightiest layer so far, living up to its parent paper's claim that “Attention is all you need” in the truest sense. Almost all tasks, be it images, voice, text, or reasoning,...

Category Loss Function

Min-Max Loss, Self Supervised Classification

We left the last post, Min Max Loss Classification Example, on the promise of demonstrating a self-supervised classification example. What is self-supervised classification? Self-supervised classification is a task...

Min-Max Loss, Revisiting Classification Losses

In continuation of my Partially Tagged Data Classification post, we formulate a generic loss function applicable to all tasks (classification, metric learning, clustering, ranking, etc.).

Multi Single Class Learning, Classifying Partially Tagged Data

Machine learning requires a large amount of clean data for models to be trained, but that's rarely the case in reality. A real-life scenario is clicks/likes data,...
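To make the partially tagged setting concrete (an illustrative sketch only, not the Multi Single Class formulation the post develops), a common way to train on labels that are only partially observed, such as clicks/likes, is to mask the unknown entries out of a standard binary cross-entropy loss:

```python
import torch
import torch.nn.functional as F

def masked_bce_loss(logits, labels, observed_mask):
    """Binary cross-entropy computed only where a tag is actually observed.

    logits:        (batch, num_classes) raw model outputs
    labels:        (batch, num_classes) 0/1 targets, arbitrary where unobserved
    observed_mask: (batch, num_classes) 1.0 where the tag is known, 0.0 otherwise
    """
    per_entry = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    return (per_entry * observed_mask).sum() / observed_mask.sum().clamp(min=1)

# Illustrative usage: only ~30% of tags per example are known.
logits = torch.randn(4, 5, requires_grad=True)
labels = torch.randint(0, 2, (4, 5)).float()
mask = (torch.rand(4, 5) < 0.3).float()
loss = masked_bce_loss(logits, labels, mask)
loss.backward()
```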

Category Classification

Min-Max Loss, Revisiting Classification Losses

In continuation of my Partially Tagged Data Classification post, we formulate a generic loss function applicable to all tasks (classification, metric learning, clustering, ranking, etc.).

Category Self Supervised Learning

Dropout Inherently Enables Contrastive Self Supervised Pretraining

Self-supervised training is a eureka concept: machines no longer need labels to learn concepts. It started with the scarcity of tagged data, to which self-supervised training was the solution. However, in recent...

Min-Max Loss, Self Supervised Classification

We left the last post, Min Max Loss Classification Example, on the promise of demonstrating a self-supervised classification example. What is self-supervised classification? Self-supervised classification is a task...

Category Reinforcement Learning

Deep Learning Models are Reinforcement Learning Agents

Deep learning has reached great heights, while reinforcement learning is yet to find its moment. But today we will take a problem from the famous deep learning space and map...

Category Pretraining

Dropout Inherently Enables Contrastive Self Supervised Pretraining

Self-supervised training is a eureka concept: machines no longer need labels to learn concepts. It started with the scarcity of tagged data, to which self-supervised training was the solution. However, in recent...

Category Dropout

Dropout Inherently Enables Contrastive Self Supervised Pretraining

Self-supervised training is a eureka concept: machines no longer need labels to learn concepts. It started with the scarcity of tagged data, to which self-supervised training was the solution. However, in recent...
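As a rough sketch of the idea in this teaser (not the post's actual implementation; the encoder, the InfoNCE-style loss, and all hyperparameters below are assumptions), two forward passes through the same network with dropout left active produce two different “views” of each input, and a contrastive loss can then pull those views together:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Placeholder encoder; dropout stays active so repeated passes differ."""
    def __init__(self, dim_in=128, dim_hidden=256, dim_out=64, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(dim_hidden, dim_out),
        )

    def forward(self, x):
        return self.net(x)

def dropout_contrastive_loss(encoder, x, temperature=0.1):
    # Two stochastic passes: dropout itself acts as the data augmentation.
    z1 = F.normalize(encoder(x), dim=-1)
    z2 = F.normalize(encoder(x), dim=-1)
    logits = z1 @ z2.t() / temperature       # pairwise cosine similarities
    targets = torch.arange(x.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

encoder = Encoder()
encoder.train()                              # keep dropout enabled
x = torch.randn(32, 128)                     # unlabeled batch
loss = dropout_contrastive_loss(encoder, x)
loss.backward()
```

No labels are used anywhere above; the only supervision signal is that two dropout-perturbed embeddings of the same example should agree more than embeddings of different examples.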

Category VAE

Dropout Inherently Enables Contrastive Self Supervised Pretraining

Self-supervised training is a eureka concept: machines no longer need labels to learn concepts. It started with the scarcity of tagged data, to which self-supervised training was the solution. However, in recent...