
Most efficient way to read a large number of files #12

Open
VLucet opened this issue Oct 18, 2022 · 4 comments

Comments


VLucet commented Oct 18, 2022

Hi! I have about 2.7 million images I'd like to efficiently read EXIF data from in Julia. ExifViewer seems much faster than ImageMagick for that task. Do you have any advice on how to complete this task most efficiently with ExifViewer?

Member

ashwanirathee commented Oct 19, 2022

Hey @VLucet!

I think the speed optimization will come from using keyword arguments: if you know which specific tags you want to read and which IFD they are in, then we use

julia> read_tags(data; ifds = specificidnum, read_all = false, tags = ["EXIF....", "EXIF....."])
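For scanning a collection as large as 2.7 million files, the selective `read_tags` call above can be wrapped in a per-file loop and parallelized across threads. This is a minimal sketch, not from the thread itself: it assumes `read_tags` accepts a file path, and the tag names (`"EXIF_TAG_DATE_TIME"`, `"EXIF_TAG_MAKE"`) and IFD index are illustrative placeholders.

```julia
using ExifViewer

# Read a fixed set of EXIF tags from many files in parallel.
# Start Julia with `julia -t auto` so Threads.@threads has
# multiple threads to distribute the files across.
function read_exif_batch(paths::Vector{String};
                         tags = ["EXIF_TAG_DATE_TIME", "EXIF_TAG_MAKE"],
                         ifds = 1)
    results = Vector{Dict}(undef, length(paths))
    Threads.@threads for i in eachindex(paths)
        # read_all = false with an explicit tag list avoids
        # decoding tags the caller never asked for.
        results[i] = read_tags(paths[i]; ifds = ifds,
                               read_all = false, tags = tags)
    end
    return results
end
```

Restricting `tags` and `ifds` keeps the per-file work small, which matters far more than the loop structure at this scale.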

Author

VLucet commented Oct 21, 2022

Okay, thanks! I'll test that.

Member

ashwanirathee commented Feb 23, 2023

I think we are likely to see a substantial speedup after adopting SnoopPrecompile, so I'm reopening this until then.
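For context, a SnoopPrecompile workload is typically added near the bottom of the package's module definition, roughly as below. This is a hedged sketch: the bundled sample-image path is a hypothetical placeholder, and note that precompilation mainly cuts time-to-first-call latency rather than steady-state per-file throughput.

```julia
using SnoopPrecompile

@precompile_setup begin
    # Hypothetical small sample image shipped with the package,
    # used only to drive compilation of the main code paths.
    sample = joinpath(@__DIR__, "..", "test", "sample.jpg")
    @precompile_all_calls begin
        # Exercise the main entry point so its methods are
        # compiled ahead of time and cached with the package.
        read_tags(sample; read_all = true)
    end
end
```

SnoopPrecompile has since been renamed PrecompileTools (with `@setup_workload`/`@compile_workload`), but the pattern is the same.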

@ashwanirathee
Member

With #19, the experience of extracting EXIF data will improve a lot; see the PR message for more info. I'll go over the package again with JET.jl and other tools to find more room for improvement and get those issues fixed soon.
