TF Lite convert error #58
It appears that the problem is related to KerasCV's implementation: they use their own implementation of the GroupNormalization layer, which is causing this error, whereas this repo's implementation uses the GroupNormalization layer from tensorflow-addons. I was able to save the diffusion model without errors by using this repo.
**EDIT:** While saving is possible, converting to TFLite appears to be impossible. Protobuf has a size limit of 2 GB, and because the saved model exceeds 3 GB, the conversion returns the following error:
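For context, the conversion path that hits this limit can be sketched with a tiny stand-in model (this is an illustrative sketch, not the diffusion model itself; note that recent tf.keras versions also ship their own GroupNormalization layer, distinct from both the KerasCV and tensorflow-addons implementations discussed above):

```python
# Minimal sketch with a stand-in model, NOT the diffusion model itself.
# Recent tf.keras versions ship their own GroupNormalization layer (distinct
# from the KerasCV and tensorflow-addons implementations discussed above).
# Conversion succeeds here only because the graph is tiny: protobuf caps a
# single serialized message at 2 GB, which a ~3 GB saved model exceeds.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8, 8, 4)),
    tf.keras.layers.GroupNormalization(groups=2),  # 2 groups over 4 channels
    tf.keras.layers.Conv2D(4, 3, padding="same"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()  # a multi-GB graph fails here instead
```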
Thank you @costiash. Another one for using the pb model?
I tried exporting the saved model.
I also tried the following:
However, when loading with the earlier version, it looks like it was not able to load it either:
Regarding the difference in the errors that you got: you need to add the TensorFlow ops and, if needed, also enable the TensorFlow ops.
From the docs:
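The documented pattern looks roughly like the sketch below, which lets ops that have no built-in TFLite kernel fall back to their TensorFlow kernels (the "Select TF ops" / Flex path). A tiny stand-in model is used here; the real diffusion model is what actually needs the fallback:

```python
# Sketch of the fix: allow TensorFlow ops alongside the built-in TFLite ops.
# The stand-in model below converts either way; large models with unsupported
# ops are the ones that need SELECT_TF_OPS.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(8,)),
                             tf.keras.layers.Dense(4)])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # the regular built-in TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # plus TensorFlow ops via the Flex delegate
]
tflite_bytes = converter.convert()
```

Note that a model converted with SELECT_TF_OPS needs a TFLite runtime built with the Flex delegate to run.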
Thank you @costiash, I finally tried this and it worked:
I have an output model of 1.6 GB.
@charbull , any chance you've had time to test this? I just ordered an Edge TPU to try out stable diffusion on (it seemed cheaper than buying a new graphics card). If it works, any chance you have a full working code example upstream of generating the tflite file? Also curious what you got for timing and quality of output :)
Hi @joe-albert, I ended up opening an issue on the keras_cv repo here: keras-team/keras-cv#1033. It seems there is a need to convert the three parts separately to TF Lite int8 and then merge them to use them on the Edge TPU. Let us know if you make progress on your end too :) Cheers
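A hedged sketch of the full-integer (int8) post-training quantization that the Edge TPU compiler requires. A tiny stand-in model and random calibration data are used here; the real pipeline would quantize each of the three sub-models separately, each with representative inputs of its own:

```python
# Full-integer post-training quantization sketch (stand-in model, random
# calibration data). The Edge TPU compiler only accepts fully int8 models.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(8,)),
                             tf.keras.layers.Dense(4)])

def representative_dataset():
    # A handful of samples is enough for this sketch; real calibration
    # should use inputs drawn from the model's actual data distribution.
    for _ in range(16):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # quantize the I/O tensors too,
converter.inference_output_type = tf.int8  # as the Edge TPU expects
tflite_int8 = converter.convert()
```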
Just so I can keep following along: your goal is to make a TF Lite model so you can use a TPU? I looked up TPUs, and is it basically a "Raspberry Pi" that can do ML? Any idea of the expected speed? (i.e., can it return an image in x amount of seconds)
@soten355 yes, the idea is to use the TPU on the board instead of the CPU; the Raspberry Pi does not have a TPU.
@charbull , to add to that: @soten355 you might have seen the Coral dev board, which is similar to a Raspberry Pi, with one difference being that it has a TPU integrated into it. TPU stands for Tensor Processing Unit; essentially it is a specialized chip to perform the vector and matrix operations used in TensorFlow. I have no idea what the expected speed would be, unfortunately. Google's TPU cards with 8 M.2 chips (I think that's what was used, but I might be wrong!) have been clocked at 8 images in ~7 seconds. I believe the Coral USB accelerator has 1 M.2 module.
I came here wondering if it's worth buying a Coral USB TPU. I saw a similar question at google-coral/edgetpu#621. It seems you're in the middle of testing it? Does it work?
It's been a while, any progress? |
@SkyyySi sorry for the late reply. It is still on my radar; I didn't try things out yet.
@SkyyySi , I've been busy with other things too and haven't had time to look at this :/ |
@joe-albert Don't worry, don't worry, just wanted to check, since maybe you found something and just forgot to post. Good luck with whatever you're doing.
Hello @charbull @joe-albert |
Hi,
Leaving this here in case someone is also trying to convert to a TF Lite model.
From the keras_cv documentation:
Load the model:
It seems a similar error occurs when trying to save the model.
The error in conversion:
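For reference, the SavedModel-to-TFLite route the post describes can be exercised end to end with a tiny stand-in model (the real keras_cv StableDiffusion sub-models carry multi-GB weights and hit the protobuf limit discussed above):

```python
# Runnable stand-in for the SavedModel -> TFLite route. The post converts
# the keras_cv StableDiffusion sub-models; a tiny model exercises the same
# export-then-convert path without the multi-GB weights.
import tempfile
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(8,)),
                             tf.keras.layers.Dense(4)])

saved_dir = tempfile.mkdtemp()
model.export(saved_dir)  # writes a SavedModel with a serving signature

converter = tf.lite.TFLiteConverter.from_saved_model(saved_dir)
tflite_bytes = converter.convert()
```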