Most efficient way to read a large number of files #12
Comments
Hey @VLucet!! I think the speed optimization will come from using keyword arguments: if you know which tags you specifically want to read and which IFD they are in, then use `read_tags(data; ifds = specificidnum, read_all = false, tags = ["EXIF....", "EXIF....."])`
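Putting the suggestion above together, a minimal sketch for batch-reading a few tags from many files might look like the following. The directory path, the `files` vector, and the specific tag names are assumptions for illustration; the `read_tags` call with its `ifds`, `read_all`, and `tags` keywords is taken from the comment above. Start Julia with `julia -t auto` (or set `JULIA_NUM_THREADS`) for the threaded loop to help.

```julia
using ExifViewer

# Hypothetical image directory and tag names -- adjust for your data.
files = readdir("photos"; join = true)
wanted = ["EXIF....", "EXIF....."]   # the tags you actually need

results = Vector{Any}(undef, length(files))
Threads.@threads for i in eachindex(files)
    # Restricting to one IFD and a short tag list avoids parsing
    # every tag in every file, which is the main cost at scale.
    results[i] = read_tags(files[i]; ifds = 1, read_all = false, tags = wanted)
end
```

For millions of files, the per-call overhead dominates, so limiting `tags` and `ifds` as shown (rather than reading everything and filtering afterwards) is the design choice that matters most.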
Okay, thanks! I'll test that.
I think we are likely to gain a substantial speedup after adopting SnoopPrecompile, so I'm reopening this until then.
With #19, the experience of extracting EXIF data will improve by a lot; see the PR message for more info. I'll go over the package again with JET.jl and other tools to find more areas for improvement and get those fixed soon.
Hi! I have about 2.7 million images I'd like to efficiently read EXIF data from in Julia. ExifViewer seems much faster than ImageMagick for that task. Do you have any advice on how to complete this task most efficiently with ExifViewer?