DS by DB


Looking at the world through data

Explaining Papers 2: LIME

For this second entry in the series, I’ll be explaining the concepts behind the paper “Why Should I Trust You?”: Explaining the Predictions of Any Classifier (direct PDF download link). This is the main paper, submitted to arXiv, that introduces Local Interpretable Model-agnostic Explanations, or LIME. I touched on it briefly in a previous blog post about why we might want to understand what our models understand, and now I’d like to expand on the methods and technicalities behind it. I’ll try to keep things as non-technical as I can, but I may fail in that regard at some points, so I apologize in advance.
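To make the core idea concrete before the deep dive, here is a minimal sketch of the loop LIME runs for a tabular classifier: perturb the instance, query the black-box model, weight the perturbations by how close they sit to the original point, and fit a simple weighted linear model whose coefficients become the explanation. The instance x, the predict_proba function, and the kernel width below are placeholder assumptions, not the paper’s actual implementation (the open-source lime package does all of this far more carefully).

import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(x, predict_proba, n_samples=1000, kernel_width=0.75):
    # 1. Perturb the instance by adding Gaussian noise around it.
    perturbed = x + np.random.normal(0, 1, size=(n_samples, x.shape[0]))
    # 2. Ask the black-box model for its predictions on the perturbations.
    preds = predict_proba(perturbed)[:, 1]  # probability of the positive class
    # 3. Weight each perturbation by its proximity to the original instance.
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit a simple, interpretable model locally; its coefficients are the explanation.
    local_model = Ridge(alpha=1.0)
    local_model.fit(perturbed, preds, sample_weight=weights)
    return local_model.coef_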


Explaining Papers 1: YOLOv1

This is the first in a series of blog posts explaining scientific papers from the data science research community. For this introduction, we’ll be taking a look at the popular object detection algorithm YOLOv1, created by Joseph Redmon and his collaborators. Unlike object detection models before it, YOLO only needs to look at an input picture once (hence the name: You Only Look Once) to make its predictions. Because of this, it is a marked improvement in detection speed over its predecessors.
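To give a rough feel for what “looking once” means in practice, here is a sketch of how a YOLOv1-style output tensor is laid out and decoded. The grid size, boxes per cell, and class count follow the paper’s 7×7×30 layout, but the tensor itself is a random placeholder rather than a real network’s output, and the score threshold is arbitrary.

import numpy as np

S, B, C = 7, 2, 20                        # grid size, boxes per cell, number of classes
output = np.random.rand(S, S, B * 5 + C)  # stand-in for the network's 7x7x30 prediction

detections = []
for row in range(S):
    for col in range(S):
        cell = output[row, col]
        class_probs = cell[B * 5:]  # conditional class probabilities for this cell
        for b in range(B):
            x, y, w, h, conf = cell[b * 5:(b + 1) * 5]  # one box, relative to the cell
            score = conf * class_probs.max()            # class-specific confidence
            if score > 0.25:
                detections.append((row, col, x, y, w, h, score, class_probs.argmax()))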


Understanding What Our Models Understand: A Cautionary Tale

When I was first introduced to convolutional neural networks, I, like many others, was captivated by their ability to abstract much of the learning process away from the surface, i.e. to learn on their own without much input from the model’s master. This isn’t a rare reaction, either; not only the general public, but students and model creators alike, have expressed this same sentiment in better words than I ever could. CNNs often get compared to things like the black boxes carried on airliners, and even to straight-up wizardry. As for myself, I’d like to imagine I’ll be called a Tech-Priest one of these days.


Matrices, Confusion, and Confusion Matrices

Multiclass classification problems present issues not just with creating and running a multitude of models, but also with interpreting those models’ results. We see this issue arise time and time again with black-box models such as convolutional neural networks. We don’t necessarily know how these models produce their results, even though we can talk at length about the activation functions within each neuron.
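As a quick illustration of the kind of table this post revolves around, here is a small multiclass confusion matrix built with scikit-learn; the animal labels are made up purely for the example.

from sklearn.metrics import confusion_matrix

y_true = ["bird", "bird", "bird", "cat", "cat", "dog", "dog"]
y_pred = ["bird", "cat", "bird", "cat", "dog", "dog", "dog"]

labels = ["bird", "cat", "dog"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)
# Row i, column j counts how often the true class labels[i] was predicted as labels[j],
# so the diagonal holds the correct predictions and every off-diagonal cell is a place
# where the model got confused between two classes.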


The Ocarina of Time (Series)

Remember The Legend of Zelda: Ocarina of Time? It’s one of those classic video games that’s very black-and-white. The Hero saves the day by defeating the Big, Bad, Evil Guy, the latter of whom wants nothing more than the destruction of the entire world. It doesn’t get any simpler than that. Light overcomes Dark, Courage overcomes Power, and in the case of this particular game, it all happens while you’re able to see exactly what happens if you don’t complete your mission.