Why tf tries to occupy so much GPU memory for simple test? #8

Open
vedrusss opened this issue Oct 17, 2018 · 0 comments
Hi there,
I'm trying to run your light_head_simple_demo.py with the provided model model.ckpt-121397.
But when I start the script I run into an out-of-memory issue: while restoring parameters from the checkpoint, TF tries to occupy more than 14 GB of GPU memory and fails because I have less than 16 GB.
Yet the model weights file is smaller than 0.5 GB.
Does it really require more than 16 GB? Or do I have wrong settings somewhere for the model? How much memory does this test require?

For the test I use a TensorFlow Docker container with pre-installed Ubuntu 16.04, CUDA 9.0, TensorFlow 1.6, and Python 3.5.

Thanks
