Frequentist or Bayesian, Who am I?

I am a Software Architect and an Independent Researcher who has designed and developed data products from Ideation to Go-To-Market at enterprise scale throughout my career. I am a perpetual learner who learns new things and makes them work. My passion is Programming and Mathematics for Deep Learning and Artificial Intelligence. My focus areas are Computer Vision and Temporal Sequences for Prediction and Forecasting.


Selected Writes - AI, ML, Math

RALPH Loop: Building Self-Improving AI Systems WITHOUT Claude

Posted May 02, 2026 ‐ 20 min read

Somewhere between “this agent will change everything” and “why is this output still confidently wrong,” it hit me—mid-debug, staring at a beautifully orchestrated multi-agent system that looked impressive on architecture diagrams and completely fell apart in practice. Like every builder with a half-charged laptop and an overconfident belief in “just one more agent,” I did what we all do: blamed the model. Maybe switch to Claude, maybe wait for the next breakthrough from Anthropic, maybe add more layers because clearly the problem was lack of intelligence. It wasn’t. The system wasn’t failing because it couldn’t think... it was failing because it never got the chance to think again. We’ve quietly optimized for first-pass answers in a world where the real strength of these models shows up in reflection, critique, and iteration. What I needed wasn’t a better model or a more elaborate agent hierarchy; it was a loop... a simple, almost embarrassingly obvious one... that forces the system to generate, question itself, and improve before anyone trusts the output. That loop is what I now call the RALPH Loop, and this post is about why it works, how to build it from scratch, and why it might matter more than whatever model release you’re waiting for next.
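The generate-critique-improve cycle described above can be sketched in a few lines. This is a minimal, self-contained illustration with stub functions standing in for LLM calls; none of these names come from the post itself, and a real implementation would replace each stub with a model invocation.

```python
# A minimal sketch of a generate -> critique -> revise loop.
# generate(), critique(), and revise() are hypothetical stubs; in practice
# each would call a language model.

def generate(task):
    # First-pass answer: deliberately rough.
    return {"answer": f"draft for: {task}", "quality": 1}

def critique(draft):
    # Returns a list of issues; an empty list means the draft is accepted.
    return [] if draft["quality"] >= 3 else ["needs more depth"]

def revise(draft, issues):
    # Each revision round improves the draft.
    return {"answer": draft["answer"] + " (revised)", "quality": draft["quality"] + 1}

def ralph_loop(task, max_rounds=5):
    draft = generate(task)
    for _ in range(max_rounds):
        issues = critique(draft)
        if not issues:      # critique passed: only now do we trust the output
            return draft
        draft = revise(draft, issues)
    return draft            # best effort after max_rounds

result = ralph_loop("explain KL divergence")
print(result["quality"])  # -> 3: the draft survived two revision rounds
```

The point is structural, not clever: the system is forced to question its own first-pass answer before anyone downstream sees it.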

Reflexive by Default: The Role of Human Beings in an AI-Driven World

Posted Apr 13, 2025 ‐ 10 min read

Like every self-respecting tech bro armed with a half-charged MacBook and a ChatGPT tab on speed dial, I too believed I was thinking. You know... solving bugs, crafting flows, building features. Classic human stuff. Then one day, mid-debug spiral, I caught myself whispering: “ChatGPT, explain this bug like I’m five.” And boom... insight. Progress. Sanity. That’s when it hit me: I wasn’t “thinking” anymore. I was prompting. Reflexively. No long walks. No rubber duck. Just straight-up neural outsourcing. At first, it felt like cheating. Then it felt like genius. Now? It just feels normal. This post is about the shift... where thinking (alone) became optional, and thinking (with an AI) became default. Spoiler: it’s not about losing your edge. It’s about sharpening it... with silicon. Welcome to the era where brains and bots team up... and we stop pretending we’re doing it solo.

Pair Programming with an AI: Debugging Profile Picture Uploads with Claude-3.7

Posted Mar 02, 2025 ‐ 12 min read

I’ve been stuck on a problem for a while now. You know that kind of bug... the one that refuses to budge no matter how many times you rewrite the code, tweak the request payload, or double-check the backend logs. Today, I decided to try something different. Instead of debugging alone, I brought in a peer programmer... except, this time, my partner wasn’t human. Enter Claude-3.7 Sonnet-Thinking... an AI that didn’t just spit out code snippets but actually worked through the problem like a real collaborator. And trust me, this thing wasn’t just suggesting fixes... it was thinking, iterating, making mistakes, correcting them, and even rewriting parts of my backend and frontend in an attempt to solve the issue. For the first time, I felt like I was debugging with an AI, not just using one.

How I Cut My Infrastructure Costs by 35% Overnight - A Startup Survival Checklist

Posted Oct 05, 2025 ‐ 12 min read

Picture this: It's 6 AM, I'm nursing my third cup of coffee (don't judge), and I'm staring at my GCP billing dashboard like it personally offended my mother. The numbers are glowing with the enthusiasm of a neon sign in Vegas, except instead of promising jackpots, they're promising bankruptcy. $$,000+ per month. For a healthcare startup that's still figuring out if doctors actually want AI assistance or just want us to leave them alone. That's when it hit me... not enlightenment, not a business epiphany, but pure, unadulterated panic. At this burn rate, QIQ Health would run out of runway faster than a paper airplane in a hurricane. The unit economics were laughing at me, and my cap table was about to become a cautionary tale at startup meetups. But here's the plot twist: within 24 hours, I managed to slash that bill by over 35%. No venture debt, no emergency funding rounds, no selling my kidney on the dark web. Just some good old-fashioned detective work and the kind of infrastructure archaeology that would make Indiana Jones proud.

Evaluating Large Language Models Generated Contents with TruEra’s TruLens

Posted Mar 17, 2024 ‐ 15 min read

It's been an eternity since I last endured Dr. Andrew Ng's sermon on evaluation strategies and metrics for scrutinizing AI-generated content. Particularly, the cacophony about Large Language Models (LLMs), with special mentions of the illustrious OpenAI and Llama models scattered across the globe. How enlightening! It's quite a revelation, considering my acquaintances have relentlessly preached that Human Evaluation is the holy grail for GAI content. Of course, I've always been a skeptic, pondering the statistical insignificance lurking beneath the facade of human judgment. Naturally, I'm plagued with concerns about the looming specter of bias, the elusive trustworthiness of models, the Herculean task of constructing scalable GAI solutions, and the perpetual uncertainty regarding whether we're actually delivering anything of consequence. It's quite amusing how the luminaries and puppeteers orchestrating the GAI spectacle remain blissfully ignorant of the metrics that could potentially illuminate the quality of their creations. But let's not be too harsh; after all, we're merely at the nascent stages of transforming GAI content into a lucrative venture. The metrics and evaluation strategies are often relegated to the murky depths of technical debt, receiving the customary neglect from the business overlords.


Selected Reads - Papers, Articles, Books

Density Estimation using Real NVP - GOOGLE RESEARCH/ICLR

This paper may change your perspective on AI research, especially if you are stepping into Probabilistic DNNs. Start here for unsupervised learning of probabilistic models using real-valued non-volume preserving transformations. Model natural images through sampling, log-likelihood, and latent variable manipulations. read...

The Neural Code between Neocortical Pyramidal Neurons Depends on Neurotransmitter Release Probability - PNAS

This 1997 paper brings biophysics, electrophysiology, neuroscience, differential equations, and more into one place. A good starting point for understanding neural plasticity, synapses, neurotransmitters, and ordinary differential equations. read...

Using AI to read Chest X-Rays for Tuberculosis Detection and evaluation of multiple DL systems - NATURE

Deep learning (DL) is used to interpret chest X-rays (CXR) to screen and triage people for pulmonary tuberculosis (TB). This study compared multiple populations in a retrospective evaluation of three DL systems. read...

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization - IEEE/ICCV

Grad-CAM uses the gradients flowing into the final convolutional layer to produce coarse localization maps that highlight the image regions most responsible for a prediction, making CNN decisions visually explainable without changing the model architecture. read...

Evolve Your Brain: The Science of Changing Your Mind by Joe Dispenza - BOOK

Ever wonder why you repeat the same negative thoughts in your head? Why you keep coming back for more from hurtful family members, friends, or significant others? read...

Selected Watch - Social Media/OTT Content

Eureka: Dr. V. Srinivasa Chakravarthy, Prof., CNS Lab, IITM

Interaction with Prof. Chakra, Head of the Computational Neuroscience Lab. Computational neuroscience serves to advance theory in basic brain research as well as psychiatry, and bridge from brains to machines. watch...

Quantum, Manifolds & Symmetries in ML

Conversation with Prof. Max Welling on Deep Learning with non-Euclidean geometric data like graphs/topology, and on allowing networks to recognize new symmetries. watch...

The Lottery Ticket Hypothesis

Yannic's review of The Lottery Ticket Hypothesis, a paper from an MIT team on network optimization through sparse sub-networks. watch...

Backpropagation through time - RNNs, Attention etc

MIT 6.S191 Introduction to Deep Learning by Alexander Amini and Ava Soleimany. Covers intuition for Recurrent Networks, LSTMs, Attention, gradient issues, sequential modelling, and more. watch...

What is KL-Divergence?

A cool explanation of Kullback-Leibler Divergence by Kapil Sachdeva. It declutters many issues like asymmetry, log-likelihood, cross-entropy, and forward/reverse KLDs. watch...
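The asymmetry the video declutters is easy to see numerically: for discrete distributions P and Q, KL(P||Q) generally differs from KL(Q||P). A quick sketch (my own toy example, not from the video):

```python
import math

def kl(p, q):
    # D_KL(P || Q) = sum_i p_i * log(p_i / q_i), for discrete distributions.
    # Terms with p_i = 0 contribute nothing by convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]

forward = kl(p, q)   # KL(P || Q) ~ 0.511
reverse = kl(q, p)   # KL(Q || P) ~ 0.368
print(forward, reverse)  # the two directions disagree: KL is not a metric
```

That directional mismatch is exactly why forward vs. reverse KL gives different behaviour (mean-seeking vs. mode-seeking) when used as a training objective.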

Overfitting and Underfitting in Machine Learning

In this video, two PhD students discuss overfitting and underfitting, super important concepts for understanding ML models, in an intuitive way. watch...

Attitude? Explains Chariji - Pearls of Wisdom - @Heartfulness Meditation

Chariji was the third in the line of Raja Yoga Masters in the Sahaj Marg System of Spiritual Practice of Shri Ram Chandra Mission (SRCM). Shri Kamlesh Patel, also known as Daaji, is the current Guide of the Sahaj Marg System (known today as HEARTFULNESS) and is the President of Shri Ram Chandra Mission. watch...