07 May 2023

A simple tool for searching PDFs on your local storage

For many years I have been facing this simple problem. I download a lot of PDFs; usually they are papers, electronic academic books and the like, with the occasional non-academic item such as fiction. Earlier I used to categorise them while saving them to local storage. As an aside, this local storage is a hard disk, but it can also be a network drive reachable by the usual file commands (e.g. ls or a file explorer). A lot of the time it was, and still is, possible for me to recall which directory a particular paper was saved into. But as the number of files keeps growing, it becomes difficult to track or remember them all. So it had been on my mind for many months to write a small piece of software that I could use to search the contents of the PDFs, hopefully quickly and efficiently. I did not need an exact and accurate search, even if that were possible.

The repository here is the fruition of those efforts. It is a pure Python package, and I have written an extensive README. Feel free to use it. For feedback, positive or negative, and for suggestions for improvement, contact me at the e-mail address mentioned in the README.

Of course, this can be extended to search DOC files too; drop me a line if you want that functionality.
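The core idea is compact enough to sketch. The following is only an illustration of the approach, not the package's actual code; it assumes the pypdf library, and the ~/papers directory and query string are hypothetical:

from pathlib import Path

from pypdf import PdfReader  # assumed dependency; any PDF text extractor would do

def search_pdfs(root, query):
    """Yield paths of PDFs under `root` whose extracted text contains `query`."""
    query = query.lower()
    for path in Path(root).expanduser().rglob("*.pdf"):
        try:
            text = " ".join(page.extract_text() or "" for page in PdfReader(path).pages)
        except Exception:
            continue  # skip encrypted or otherwise unreadable files
        if query in text.lower():
            yield path

for hit in search_pdfs("~/papers", "contrastive"):
    print(hit)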

18 February 2023

A very good read on supervised contrastive learning

Reference:

Supervised Contrastive Learning, Khosla et al., NeurIPS 2020

From the abstract: "Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement, and reference TensorFlow code is released at this https URL."
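To make the loss concrete, here is a minimal NumPy sketch of the SupCon loss as the abstract describes it; the function name, the temperature default and the decision to skip anchors without positives are my choices, not the authors' reference code:

import numpy as np

def supcon_loss(features, labels, temperature=0.07):
    """SupCon loss for one batch. features: (N, D) L2-normalised
    embeddings; labels: (N,) integer class labels."""
    n = features.shape[0]
    sim = features @ features.T / temperature      # pairwise similarities
    not_self = ~np.eye(n, dtype=bool)              # exclude i == j everywhere
    # numerically stable log-softmax over all other samples in the batch
    row_max = np.max(np.where(not_self, sim, -np.inf), axis=1, keepdims=True)
    exp_sim = np.exp(sim - row_max) * not_self
    log_prob = sim - row_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    # positives: other samples sharing the anchor's label
    pos = (labels[:, None] == labels[None, :]) & not_self
    n_pos = pos.sum(axis=1)
    mean_log_prob_pos = (log_prob * pos).sum(axis=1) / np.maximum(n_pos, 1)
    return -mean_log_prob_pos[n_pos > 0].mean()

Minimising this pulls same-class embeddings together and pushes different-class embeddings apart, which is exactly the behaviour the abstract describes.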

31 December 2022

Confidence Intervals references

Here are a couple of references on confidence intervals (CI). Without getting too technical about the computations, they provide the motivation, the "how to" and, more importantly, the interpretation of the CI.
The key lines for me:
"The theoretical basis for the calculation of a CI includes the assumption that (he study can be repeated many times. Each time, different results would be obtained through the selection of a slightly different sample of patients from the population (sampling variability). Each trial would therefore also produce a different 95% CI. If the trial were performed 100 times, then, on average, 95 of the 95% CIs calculated would contain the true value (and 5 would not). In practice, however, we usually perform a study only once. Once we actually perform a trial, and calculate a single 95% Cl, the true value either lies within this confidence interval or it does not. Therefore, in referring to the particular results from a single study, it is not correct to state that there is a 95% chance or probability that the true value lies within the CI. It is correct to say that if the true value lies outside the 95% CI, the likelihood of obtaining the data observed in the study is 5% or less."
Emphasis is mine.
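The repeated-sampling interpretation in the quote is easy to demonstrate with a small simulation. A minimal sketch using the normal approximation; the true mean, sigma, sample size and seed are arbitrary choices of mine:

import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, n, trials = 10.0, 2.0, 50, 100

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, size=n)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = m - 1.96 * se, m + 1.96 * se  # normal-approximation 95% CI
    covered += lo <= true_mean <= hi

print(f"{covered} of {trials} intervals contain the true mean")

On average the printed count is close to 95, yet any single interval either contains the true mean or it does not, which is exactly the point the quote makes.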

27 August 2022

docker startup command

VOLS="-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro " # add any other mounts you want
echo Exported paths $VOLS
docker run -it --user $(id -u):$(id -g) --rm --gpus all --ipc=host --net=host $VOLS projectmonai/monai:latest bash

05 August 2022

Bland Altman Analysis

A nice paper on Bland-Altman analysis:
Understanding Bland Altman analysis, Davide Giavarina, Biochemia Medica, 2015
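As a reminder of the mechanics, a Bland-Altman analysis plots the per-pair differences against the per-pair means, with the bias and the 95% limits of agreement drawn as horizontal lines. A minimal sketch of the computation (the function name is mine):

import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                   # paired differences (y-axis of the plot)
    mean = (a + b) / 2.0           # paired means (x-axis of the plot)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # limits of agreement
    return mean, diff, bias, loa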

Label Smoothing Analysis link

Label smoothing was introduced by the Inception authors (Szegedy et al., Rethinking the Inception Architecture for Computer Vision); a minimal sketch of the smoothed targets follows.
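The one-hot target is mixed with a uniform distribution over all classes. A minimal NumPy sketch (the function name and the eps default are my choices):

import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Targets with label smoothing: 1 - eps + eps/num_classes on the
    true class, eps/num_classes on every other class."""
    onehot = np.eye(num_classes)[y]
    return (1.0 - eps) * onehot + eps / num_classes

# smooth_labels(np.array([2]), 4) -> [[0.025, 0.025, 0.925, 0.025]]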
A decent analysis seems to be in this work: