How to Use Journal Articles for your ML Research


Many data science and machine learning companies are looking for researchers who can identify the type of problem they face, find relevant journal articles in the academic literature, pick the one with the best performance or cost/benefit fit for the problem, and replicate it well enough to apply the model to their own work.


How to Get Started with the Search

In the not-so-distant past, when searching for the latest algorithms, the best one could typically hope for was some mathematical formulas and diagrams, or references to them, and maybe some pseudocode. Thankfully, over the last few years there has been a huge push for reproducibility and transparency in the machine learning literature. Additionally, groups such as Papers with Code and Hugging Face have been working to make these models easier to access and use.

To get started, all you have to do is think about the problem you are trying to solve. For example, if you are trying to improve the accuracy of an image measurement and quantification algorithm, you may want to look into the field of super resolution to improve the underlying image quality.

If you are working on identifying specific features in images, you could look at the latest in semantic segmentation, and possibly even see if there is already research on identifying the specific object type you are interested in. If you want to understand some characteristics of text, look into natural language processing. The list goes on and on…

Lately, I have been more interested in super resolution because I thought it could help improve the performance of some algorithms I created to analyze scientific images. From my early research into computer vision, I had seen super resolution improve the quality of everyday photographs, and I wondered what it would take to improve the quality of my scientific images. Preparing scientific image datasets and training a neural network seemed prohibitive at the time. One friend posited that the features a super resolution network learns may be similar regardless of whether the image is a typical life scene or a scientific image.


My Search Results

As a thought experiment, I decided to look up the latest super resolution papers. My first Google search was surprisingly fruitful, as Papers with Code has indexed the latest journal articles on super resolution using natural language processing and internet searches.

Papers with Code page on super resolution.

From Papers with Code, it was relatively easy to compare the performance of the models, choose the best one, download the code from GitHub, and test its performance. I wanted to see if my results matched the authors', so I continued looking for the standard reference datasets they used for testing. The datasets proved a bit harder to find, but I eventually stumbled across some on Hugging Face.
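The leaderboards on sites like Papers with Code typically rank super resolution models by PSNR (and SSIM) on the standard test sets, so reproducing an author's reported numbers mostly comes down to computing the same metric on the same images. As a rough illustration (not the authors' code), PSNR can be written in a few lines of NumPy:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: an 8x8 image that is off by exactly 1 everywhere (MSE = 1)
ref = np.full((8, 8), 128, dtype=np.uint8)
out = ref + 1
print(round(psnr(ref, out), 2))  # 48.13 dB
```

Note that papers sometimes compute PSNR on the Y (luma) channel only, or after cropping the image borders, so small discrepancies from published numbers are normal.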

One researcher has compiled the standard datasets into a fast format that is easily downloadable from the internet, pulled the latest code for each model, reformatted it consistently, and benchmarked the models himself on those datasets. He had also set up extremely simple test scripts and showed that, despite what the literature said, the DRLN model achieved better super resolution performance than the HAN model. Therefore, I decided to download his code and datasets (installable via pip) and found I could easily improve the appearance of images with it on my own computer. Additionally, I found that he has an awesome webapp where you can test some of the super resolution neural networks on your own images or standard ones. DRLN wasn't available there, but the reference image definitely appears clearer with 4x upscaling using the EDSR-base model.
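To give a sense of how simple the pip-installable library makes this, here is a sketch of 4x upscaling along the lines of the super-image package's documented usage. The model identifier and file paths are illustrative placeholders; check the library's README for the exact names, and note that the first call downloads pretrained weights from Hugging Face:

```python
def upscale_4x(input_path: str, output_path: str) -> None:
    """Upscale an image 4x with a pretrained EDSR-base model (sketch).

    Imports are deferred so this sketch can be read or loaded even
    without `pip install super-image` and Pillow in the environment.
    """
    from PIL import Image
    from super_image import EdsrModel, ImageLoader

    # 'eugenesiow/edsr-base' follows the library's Hugging Face naming;
    # treat it (and the scale) as an assumption to verify against the docs.
    model = EdsrModel.from_pretrained("eugenesiow/edsr-base", scale=4)
    image = Image.open(input_path)
    inputs = ImageLoader.load_image(image)  # PIL image -> model input tensor
    preds = model(inputs)                   # run super resolution
    ImageLoader.save_image(preds, output_path)

# Usage (placeholder filenames):
# upscale_4x("my_image.png", "my_image_4x.png")
```

The appeal here is that the library hides the dataset formatting and weight loading entirely, so testing a new model is a one-line change to the `from_pretrained` call.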

In the next post, I will show how the pretrained DRLN model from Eugene Siow's Hugging Face repository worked on my own images.
