31 December 2025

Corporate shit posting

Anyone who has been reading my infrequent blog posts will not be surprised by the extreme contempt and disgust I have for fakers - especially the corporate fat cats. Two things annoy me equally - fake humility and boasting. There is a word for fake humility - which I forget. Examples abound on that useless platform (suitable for narcissists) called LinkedIn - "humbled to be awarded the best nose hair picker in the 2025 cohort; grateful to grease merchant 1, group head, blah, blah". There are enough parodies of this kind of junk post, on LinkedIn itself. I am sure LinkedIn management itself encourages this kind of shit posting - it increases content, maybe.

The other kind of post which provoked me to unleash is the one that goes "As the year 2025 draws to a close, i spent the holidays reflecting on the challenges...". Translated into simple words, this basically means "where all did I corporately shit, create a ruckus and try to inflate my importance to the organisation? Did I emit enough personalised methane to cause a noticeable stink in a corporate room already reeking of arse vapour? Would it be enough to keep gulping crores to feed my ever cavernous family, who cannot be denied their 50 lakh schools, kitty parties and my 20 crore villa?" etc.

Many a time, I wonder whether these corporate methane emitters really breathe in their own bullshit. Do they practice their vomit in front of a mirror? Do they "craft" their infantile, inane, patently fake and absurd odes to their own diminishing intellect and increasing greed, all the while fawning upon those whom they think will further their greed? An example - I have personally been in a room full of senior people where the centre head openly said "the bus is leaving. Cling onto the doors, windows or any which way you can get onto the bus." This was in the context of a famous healthcare company shutting down research centres all over the world; naturally, there was a lot of unrest, speculation and discontent amongst the employees. To me, this was shocking. What I was expecting from the "leadership" was sobering reality - a retrospective of the steps which led to a once famous brand bleeding like a stuck pig; obnoxious diversity candidates being pushed to levels of incompetence far beyond the Peter principle, and so on. However, what I got to hear was just the opposite: the so-called leader with 30 years of experience whining like a (well, the analogies in civilised language fail me at this point) something. Without any surprise, this limpet could cling on to his job despite wholesale "bloodletting" of many talented scientists and engineers; though maybe some much needed spring cleaning was also done.

Recently, I saw this transformational leader "reflecting" on the pearls he had discharged earlier in the year - internal corporate bowel movements, maybe. Apparently, his gem to humanity was something like "artificial intelligence is not enough; intelligence is the key". For sure, this sack of protoplasm has the intelligence to be "loyal" - the kind which Bricktop disparagingly mentions before gesturing to his henchmen to off the toady in the movie Snatch.

23 August 2025

On a mechanism to prevent or minimise code stealing

It is very well known that a large part of the modern software industry is built on open source software; I do not know how much of it is really built on stolen software. Of all the industries, the software industry is the greediest - licenses are getting more and more cost sucking. To draw a parallel, how would it be if these software CFOs had to pay for every turn of the wheel in their swanky cars? Or if they had to pay for the car's running according to the number of passengers, or the weight, or the volume, or some maximum of these quantities? These greedy ghouls know nothing except how to screw out the last coin. And there are compliant and willing software "engineers" ready to be their parasites.

This leads me to the next theme, or rather the original theme, of large corporate crooks, infested by greedy snouts, pushing stolen code into "production"; though what it "produces" is moot, except money for the snouts already buried in the trough. Not only have these super greedy snouts sucked up all the code, but also the data. There are cases of these mega corporations leeching pirated books - for shame. Now these LLM "agents" and other garbage, not content with the daylight robbery of code and knowledge, are also not respecting robots.txt. Quite a few networking companies report that the majority of their traffic comes from "AI" crawlers; basically the googlles, miscorfites and facepalmbooks of the world have unleashed millions of energy hungry processes to molest the internet every instant.

It is also known that projects like ffmpeg, numpy and many others in the open source world are the real foundational models for the corporate snouts. And these leeches are proud to announce the "mafia" of their employees or ex-employees - a very apt term, chosen by themselves. Though they meant it in a different way - to boost their own egos with some film-like bullshit. The godfathers of AI, for example. No self respecting scientist would be remotely interested in being called the godfather of an area.

So, I wonder if it is possible to introduce some kind of "agentic AI" to turn the game against these corporate snouts. The moment the "agentic AI" figures out that someone is leeching the code, it should generate buggy code and randomly delete the repositories.

Apart from the rant above: it is time that sane humans came together to break the stranglehold of these greedy snouts and save the world from the ever increasing burden of energy consumption, and the loss of diversity, due to these AHoles.

02 November 2024

Offbeat Datasets

Here is a collection of datasets which I found to be a bit offbeat. The datasets may well be known in their own domains; it is just that the curiosity factor impels me to collect this information.
Link to the file

Annoying terms in post modern machine learning literature

A collection of extremely stupid, artificial(sic) and pretentious terms which have sprouted around the "AI" hype. One day, if I get time, I will attempt to write parody definitions for the terminology; as of now I merely list them below.

Ingest, Consume, Foundation Model (God only knows what is "foundational" about it), Artificial General Intelligence (Really?! General intelligence?), "practitioners", "in production" (what does it produce, is what I wonder), Foundry (all I can think of such foundries producing is gas), "digital twin", "stand up", "training" (you can train a monkey or a dog; you can even train a human to imitate others like apes; but how can you "train" a mathematical model? One can only optimise its parameters.)
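To the point that "training" is nothing more than parameter optimisation: here is a minimal sketch (plain Python, a made-up one-parameter example, not any particular library's API) of what the word actually hides.

```python
# "Training" the one-parameter model y = w * x is just minimising a
# loss by adjusting w - here with plain gradient descent on the
# mean squared error. No monkeys or dogs involved.
def fit(xs, ys, lr=0.01, steps=1000):
    w = 0.0
    for _ in range(steps):
        # gradient of mean((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# For data generated by y = 2x, the optimised w lands at about 2.0.
```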

Irritating fake pretend words, mostly encountered in LinkedIn:

"chuffed"(another annoying term appearing a lot in LinkedIn), "reach out", "update"(one person asking for job recommendations is then asking for an "update").


Agentic Workflows: God only knows who dreams up these terminologies. No one really knows what it means, but it is already an "in thing", with enterprises unleashing agentic workflows. Basically, it seems to be some unnecessarily complicated natural language processing (NLP) with distributed APIs and RPC calls, all packed under some "Tools" and pseudo-philosophical babble on "what is an agent". Postmodern artificial intelligence has even conferred the title of Godfather on some of its researchers - a privilege not even Fourier (for example) has been granted.

Basically, these guys are so stuffed up in their own hype cycle that they might believe that they exhale pure oxygen and the world could survive on their "feature rich" breath "embedding molecules of oxygen".


07 May 2023

A simple tool for searching PDFs on your local storage

For many years I have been facing this simple problem. I download a lot of PDFs; usually they are papers, electronic academic books in the PDF format and the like. There may also be some non-academic content, such as fiction. Earlier, I used to categorise them while saving them on the local storage. As an aside, this local storage is the hard disk, but it can also be a network drive which is reachable by the local file commands (e.g. ls or file explorers). A lot of the time, it was or still is possible for me to recall which directory a particular paper had been saved into. But as the number of papers or files keeps growing, it becomes difficult to track or remember. Therefore, it was on my mind for many months to write a small piece of software which I could use to search the contents of the PDFs, hopefully quickly and efficiently. I did not need an exact and accurate search, even if such a thing were possible.
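To give a flavour of the idea (this is only a minimal sketch of the approach, not the actual package; it assumes the third-party pypdf library for text extraction, and the crude word-overlap score is my own illustration):

```python
from pathlib import Path

def extract_pdf_text(pdf_path):
    # Lazy import, so the scoring logic below works even without pypdf.
    from pypdf import PdfReader
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def score(query, text):
    # Inexact search on purpose: fraction of query words found in the text.
    words = query.lower().split()
    hay = text.lower()
    return sum(w in hay for w in words) / len(words) if words else 0.0

def search(root, query, threshold=0.5):
    # Rank every PDF under `root` by how many query words it contains.
    hits = []
    for pdf in Path(root).rglob("*.pdf"):
        try:
            s = score(query, extract_pdf_text(pdf))
        except Exception:
            continue  # skip unreadable or encrypted files
        if s >= threshold:
            hits.append((s, pdf))
    return sorted(hits, reverse=True)
```

A real tool would cache the extracted text so repeated searches do not re-parse every file.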

The repository here is the fruition of those efforts. It is a pure Python package. I have written an extensive README. Feel free to use it. For suggestions - positive, negative, or for improvement - feel free to contact me at the e-mail ID mentioned in the README.

Of course, this can be extended to search DOC files too; drop me a line if you want that functionality.

18 February 2023

A very good read on supervised contrastive learning

Reference:

Supervised Contrastive Learning

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement, and reference TensorFlow code is released at this https URL.

31 December 2022

Confidence Intervals references

Here are a couple of references for confidence intervals (CI). Without getting too technical about the computations, they provide the motivation, the "how to" and, more importantly, the interpretation of the CI.
The key lines for me:
"The theoretical basis for the calculation of a CI includes the assumption that (he study can be repeated many times. Each time, different results would be obtained through the selection of a slightly different sample of patients from the population (sampling variability). Each trial would therefore also produce a different 95% CI. If the trial were performed 100 times, then, on average, 95 of the 95% CIs calculated would contain the true value (and 5 would not). In practice, however, we usually perform a study only once. Once we actually perform a trial, and calculate a single 95% Cl, the true value either lies within this confidence interval or it does not. Therefore, in referring to the particular results from a single study, it is not correct to state that there is a 95% chance or probability that the true value lies within the CI. It is correct to say that if the true value lies outside the 95% CI, the likelihood of obtaining the data observed in the study is 5% or less."
Emphasis is mine.
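The repeated-sampling interpretation quoted above is easy to check numerically. Here is a small stdlib-Python simulation (my own illustration, not from the references; it uses the normal-approximation interval mean ± 1.96 × SE):

```python
import random
import statistics

def ci_coverage(true_mean=10.0, sd=2.0, n=50, trials=1000, z=1.96, seed=0):
    # Repeat the "study" many times and count how often the 95% CI
    # computed from each sample actually contains the true mean.
    rng = random.Random(seed)
    covered = 0
    for _ in range(trials):
        sample = [rng.gauss(true_mean, sd) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if m - z * se <= true_mean <= m + z * se:
            covered += 1
    return covered / trials

# Across many repeated "studies", roughly 95% of the intervals cover
# the true value - but any single interval either does or does not.
```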