Improving HOG allocations / Making a fast HOG version #52
I've been thinking about how to restructure the code to best support your suggestion. I've taken some inspiration from Mathematica's API, in particular the GradientOrientationFilter. Looking at the function reference for the ImageFeatures package, I propose that we add:

```julia
using Images  # provides imfilter, centered and orientation

# compute gradients, magnitudes and orientations once per image
gx = imfilter(img, centered([-1 0 1]))
gy = imfilter(img, centered([-1 0 1]'))
mag = hypot.(gx, gy)
phase = orientation.(gx, gy)
```

and, of course, the equivalent code for multi-channel images. Additionally, we add an `orientation_histograms` function; the result of that function call will produce what you have called a hogmap. A separate issue is adding a framework where the user can specify a region of interest in an image, as well as a window size and stride, so that the features are constructed for each window and stride inside the region of interest. We want to do this without recomputing the gradients etc. for each window inside the region of interest. I think we can handle this by passing the precomputed histograms to create_descriptor.
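To make this concrete, here is a minimal sketch of what such an `orientation_histograms` function could look like, assuming square cells of `cell_size` pixels, `B` orientation bins, and magnitude-weighted voting (the binning details here are an assumption, not a settled design):

```julia
# Sketch: build a (B, n_cell_rows, n_cell_cols) array of per-cell
# orientation histograms from precomputed magnitudes and phases.
function orientation_histograms(mag, phase; cell_size = 8, B = 9)
    rows, cols = size(mag)
    hogmap = zeros(B, rows ÷ cell_size, cols ÷ cell_size)
    for j in axes(hogmap, 3), i in axes(hogmap, 2)
        for x in (j-1)*cell_size+1:j*cell_size, y in (i-1)*cell_size+1:i*cell_size
            θ = mod(phase[y, x] + π/2, π)                  # fold angle into [0, π)
            bin = clamp(floor(Int, θ / (π / B)) + 1, 1, B)
            hogmap[bin, i, j] += mag[y, x]                 # magnitude-weighted vote
        end
    end
    return hogmap
end
```

The point is that `mag` and `phase` are computed once, and the histograms are accumulated into a single preallocated array instead of fresh per-window gradient arrays.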
I like the approach you propose. I am not sure about one detail, though. The example from the documentation that does the following:

```julia
for j in 32:10:cols-32
    for i in 64:10:rows-64
        box = img[i-63:i+64, j-31:j+32]
        descriptor[:, 1] = create_descriptor(box, HOG())
        predicted_label, s = svmpredict(model, descriptor);
    end
end
```

might do something like

```julia
for j in 32:10:cols-32
    for i in 64:10:rows-64
        box = [i-63:i+64, j-31:j+32]  # just the index ranges, no copy of the pixels
        descriptor[:, 1] = create_descriptor(orientation_histograms, box, HOG())
        predicted_label, s = svmpredict(model, descriptor);
    end
end
```

Let us consider the case where we build the hogmap once for the whole image. Something similar is what I did here: … and it takes …
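For illustration, a `create_descriptor(orientation_histograms, box, HOG())` method could internally translate the pixel box into cell coordinates and slice the precomputed histograms. A rough sketch, using a hypothetical helper and assuming the `(B, cell_rows, cell_cols)` layout from the earlier sketch:

```julia
# Sketch: assemble a window descriptor from precomputed cell histograms,
# without touching the pixel data or recomputing any gradients.
# Assumes the box is aligned to cell boundaries.
function descriptor_from_hogmap(hogmap, box; cell_size = 8)
    rowrange, colrange = box
    cr = (first(rowrange) - 1) ÷ cell_size + 1 : last(rowrange) ÷ cell_size
    cc = (first(colrange) - 1) ÷ cell_size + 1 : last(colrange) ÷ cell_size
    return vec(hogmap[:, cr, cc])  # concatenate the cell histograms
end
```

With a 128×64 window and 8×8 cells this returns a 9·16·8 = 1152-element vector, and the inner loop above no longer allocates per-window gradient arrays.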
After looking carefully at the HOG code and comparing the time needed to perform pedestrian detection in an image (I found it too slow), I want to propose another version that is as fast as possible (even though it might differ a little from the version first proposed in the academic literature). It would be nice to benchmark both versions by F-score on a common benchmark task. In any case, some ideas from the proposed version could probably be reused to rethink parts of the current HOG implementation, which could yield faster compute times.
After some discussion with @zygmuntszpak on Slack, I will start outlining the different components needed to implement the standard HOG:

1. Compute the image gradients gx and gy.
2. Accumulate magnitude-weighted orientation votes into per-cell histograms.
3. Group neighbouring cells into overlapping blocks.
4. Normalize each block and concatenate all blocks into the final descriptor.
Currently, the HOG code runs this whole process for a given input image and a `HOG()` struct. This has a basic problem for users who want to apply the descriptor to a 'big' image to perform object detection: a mix of redundant histogram computations (when windows overlap) and a lot of allocations (for each window, several arrays are created: gradients in the x and y directions, magnitudes, and orientations).

Fast HOG version 1

Perform only steps 1 and 2 above, skipping steps 3 and 4.
Why skip 3 and 4?
Well, if we do not normalize the histograms, it seems a bit odd to need the blocks, since we would end up with the exact same histogram cells copied into different blocks (quite a lot of redundant information). With normalization the blocks make sense, since each block's normalization factor changes the otherwise redundant cells.
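For contrast, what steps 3 and 4 would add is roughly the following (a sketch assuming 2×2-cell blocks with a stride of one cell and L2 normalization; both choices are assumptions):

```julia
# Sketch of what skipping steps 3 and 4 avoids: each cell histogram is
# copied into every overlapping block and rescaled by that block's norm.
function block_normalize(hogmap; block = 2, ε = 1e-5)
    B, m, n = size(hogmap)
    blocks = Vector{Float64}[]
    for j in 1:n-block+1, i in 1:m-block+1
        v = vec(hogmap[:, i:i+block-1, j:j+block-1])   # one overlapping block
        push!(blocks, v ./ sqrt(sum(abs2, v) + ε^2))   # L2 normalization
    end
    return reduce(vcat, blocks)
end
```

Without the per-block normalization factor, every copy of a given cell would be identical, which is exactly the redundancy described above.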
I will call the array made of the histograms a hogmap, which might look like this:

    hogmap = [ C_11  C_12  ⋯  C_1n
               C_21  C_22  ⋯  C_2n
                ⋮     ⋮    ⋱   ⋮
               C_m1  C_m2  ⋯  C_mn ]

where C_ij corresponds to a histogram with B bins.
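Putting the pieces together, the whole fast version might reduce to something like the following sketch (reusing the hypothetical `orientation_histograms` from above on a stand-in image):

```julia
using Images

img = rand(Float64, 128, 64)              # stand-in grayscale image

gx = imfilter(img, centered([-1 0 1]))
gy = imfilter(img, centered([-1 0 1]'))
mag = hypot.(gx, gy)
phase = orientation.(gx, gy)

# hogmap: B bins × m cell rows × n cell cols; no blocks, no normalization
hogmap = orientation_histograms(mag, phase; cell_size = 8, B = 9)
C_11 = hogmap[:, 1, 1]                    # the B-bin histogram of cell (1, 1)
```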
Hey, but this is not a HOG!
Well, it is a descriptor made with histograms of oriented gradients. It just does not normalize the different block regions, in order to get faster computation. I would like to test whether this really carries a high performance penalty. When the original HOG was proposed, no one (as far as I am aware) augmented the training set online. We could do that to generate samples with different illuminations in different regions, allowing the learning algorithm to become invariant to such changes without the descriptor having to perform local normalizations.
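As an illustration of that kind of online augmentation, here is a minimal sketch, assuming training windows stored as Float arrays with intensities in [0, 1] and an arbitrary gain range:

```julia
# Sketch: rescale the intensities of a random sub-region of a training
# window, so the classifier, not the descriptor, absorbs lighting changes.
function augment_illumination(window; gain_range = (0.6, 1.4))
    out = copy(window)
    rows, cols = size(out)
    r = rand(1:rows÷2):rand(rows÷2+1:rows)             # random sub-region
    c = rand(1:cols÷2):rand(cols÷2+1:cols)
    g = gain_range[1] + rand() * (gain_range[2] - gain_range[1])
    out[r, c] .= clamp.(out[r, c] .* g, 0, 1)          # perturbed illumination
    return out
end
```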