Arun Prakash A
Thank you for coming here! Like you, I am intrigued by problems in the fields of NLP, CV and Speech. I work with the transformer architecture, its variations, and all the concepts revolving around it! I build models using native PyTorch (for more customization) and Hugging Face (with the PyTorch backend).
You can find the notebooks here if you want to get started with PyTorch and HF. Finally, I am a GPU-poor guy with access only to V100, A100 and L4 nodes, which prevents me from pre-training language models from scratch on large-scale datasets!
My journey started with traditional Signal Processing algorithms. I was fascinated by the beauty of the Fourier Transform (FT) applied to linear time-invariant systems. I didn't appreciate the power of the Gradient Descent (GD) algorithm when I first learned it in adaptive signal processing. A paradigm shift happened when I witnessed the extension of GD (the backpropagation algorithm) taking the entire vision field by storm.
I started learning neural networks in detail only after 2015. As with the FT, I was fascinated by the attention mechanism and its clever application in the transformer architecture. I learned a lot about the approach to science and engineering from books written by Richard Feynman and Richard Hamming.