LLMs and the instant gratification that comes with them
Context
This was originally written as one of my emails at work, ranting about people being too dependent on LLMs and copilots.
And about how there is no joy in coding like this anymore. Rather, how much more you might enjoy it when you write the code yourself, it fails, you bang your head against it, and then it finally works.
The joy that comes with it is unbeatable.
But lately, that’s being replaced by a quick "Tab" key.
We’ve traded the joy of the struggle for the speed of a suggestion. We’re getting the answers faster than ever, but we’re losing the "why" along the way.
Originally drafted and written on November 17th, 2025
RE: Orchestration Requirements
Interesting. Some of the pointers are highly theoretical and LLM-generated.
Funny how it just says: “I asked LLMSuite the below question: What are the capabilities desired from a large job orchestration platform that manages tens of thousands of jobs, their dependencies, metrics, etc., in a public cloud environment?”
This question doesn’t provide the existing system as context, and it yields a bookish answer. Funny how LLMs are changing the way people think.
In the past few weeks, I have had some weird experiences. I reached out for help from senior developers in my team, since I had been debugging an issue for 24 hours and found it apt to escalate at that point.
Instead of approaching the problem with first-principles thinking, they asked me to use LLMSuite. I explained that I had already used Copilot extensively and explored a bunch of alternatives, and it didn’t work. They were reluctant, and instead of thinking about the problem, they jumped to looking for the solution. They asked me to share my screen and wanted me to type the prompt they narrated (which was a basic prompt in the first place; I’d tried better ones already). It gave a couple of options, and they asked me to try them. I explained why some of them wouldn’t work. They said, “Okay, then tell him that, tell LLMSuite that.” I soon realized this conversation “was of no help.” I said I would look at these options and figure it out, and left the call.
Thankfully, one solution clicked that night itself.
Anyway. I realized that “Instant Gratification” as a phenomenon (previously linked just to social media) is expanding to workplaces as well. People rushing to use LLMs reduces the scope for creativity altogether.
An LLM is trained on text and solutions that already exist, originally created by humans in the first place. The solutions (while they may be accurate and the best among what already exists) will never be out of the box.
That is also why it is easy to identify LLM-generated text: it gives the “best sounding answer.” It looks for the most fitting (common and expected) chain of words as a reply, and LLMs have cracked that. An LLM has no concept of “right” or “wrong”; it works purely on correlations. If a lot of people on the Internet have used a specific term in connection with another, those terms will be used together.
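To make that concrete, here is a minimal sketch of the idea. It is only an illustration, not how any real model is implemented: a toy bigram table (with a made-up training sentence) stands in for the learned correlations, and the next word is chosen purely by how often it followed the previous word.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram table stands in for the "correlations"
# a real LLM learns. The training text is invented for this example.
training_text = (
    "the model gives the best sounding answer "
    "the model gives the most expected answer "
    "the model gives the best fitting answer"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # No notion of "right" or "wrong": just the statistically
    # most common continuation seen in the training text.
    return follows[word].most_common(1)[0][0]

# Generate a "best sounding" chain of words.
word, sentence = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the model gives the model gives"
```

A real LLM predicts over tokens with learned probabilities at a vastly larger scale, but the principle is the same one described above: correlation, not a concept of right or wrong.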
This also reminds me of Reny’s example. When asked “When will A overtake B?”, it gives a right-looking answer based on the question. (Context: a friend gave an example of a physics problem where the actual solution was that B overtakes A, but because the question was framed as “When will A overtake B?”, the answer changed accordingly. The models, I believe, are more mature now.)
Apologies for the big rant, but I am tired of people (including me) leaning towards instant gratification.
Gratification in the form of summary points of a long article, or an AI-generated summary of a YouTube video, or rushing to a fix without understanding why the problem arises in the first place.
It is disappointing, and I am tired.
I want long form content. I want the focus that is needed for long form content.
I want my attention span back.
Books, documentaries, movies. Long walks. Yoga.
Conversations without having to look at the phone.
Good old Google searches and scanning a couple of articles to look for existing solutions.
I don’t need perfection, I need mistakes.
I need inconsistencies in writing, the mistakes that make us human in the first place.
Tushar
Fin
LLMs are the new instant gratification source.
And in our hurry to reach the finish line, we’ve forgotten how to enjoy the walk.