
Fix gradient tape issue in test_gradient_tape_issue.py #20277

Open · wants to merge 1 commit into master
Conversation

Shivam909058

import os

# KERAS_BACKEND must be set before Keras is imported for it to take effect.
os.environ["KERAS_BACKEND"] = "tensorflow"

import numpy as np
import tensorflow as tf
from tensorflow import keras

tf.random.set_seed(1234)


def create_toy_model():
    inputs = keras.Input(shape=(1,))
    x = keras.layers.Dense(100, activation="tanh", use_bias=True)(inputs)
    x = keras.layers.Dense(1000, activation="tanh", use_bias=True)(x)
    x = keras.layers.Dense(10, activation="tanh", use_bias=True)(x)
    outputs = keras.layers.Dense(1, activation=None, use_bias=False)(x)
    return keras.Model(inputs=inputs, outputs=outputs)


def test_gradient_tape_issue():
    model = create_toy_model()

    x = np.expand_dims(np.linspace(0, 10, num=20), axis=1)
    x = tf.convert_to_tensor(x, dtype=tf.float32)

    print(f"Number of layers: {len(model.layers)}")
    for i, layer in enumerate(model.layers):
        print(f"Layer {i}: {layer.__class__.__name__}, Weights: {len(layer.weights)}")

    with tf.GradientTape(watch_accessed_variables=True) as tape:
        last_layer_weights = model.layers[-1].weights[0]
        # tape.watch(last_layer_weights)  # not needed: trainable variables are watched automatically
        out = model(x)

    dout = tape.gradient(out, last_layer_weights)

    assert dout is not None, "Gradient should not be None"
    print("Gradient successfully computed!")


if __name__ == "__main__":
    test_gradient_tape_issue()
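For intuition about why this gradient must exist: the final Dense layer has no bias, so its output is out = h @ W, and the gradient of sum(out) with respect to the kernel W is simply the column sums of the incoming activations h. A minimal NumPy sketch of that claim, with a finite-difference check (hypothetical shapes chosen to match the toy model's last layer, independent of TensorFlow):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal((20, 10))  # activations feeding the last Dense layer (batch of 20)
W = rng.standard_normal((10, 1))   # last-layer kernel; use_bias=False, so out = h @ W

# loss = sum(out) = sum(h @ W)  =>  dloss/dW = h.T @ ones = column sums of h
analytic = h.sum(axis=0, keepdims=True).T  # shape (10, 1)

# Central finite-difference check of the analytic gradient.
eps = 1e-6
numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    Wp = W.copy(); Wp[i, 0] += eps
    Wm = W.copy(); Wm[i, 0] -= eps
    numeric[i, 0] = ((h @ Wp).sum() - (h @ Wm).sum()) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-4)
```

Because the last-layer kernel is a trainable variable touched during the forward pass, `watch_accessed_variables=True` (the default) should record it without an explicit `tape.watch`.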

google-cla bot commented Sep 22, 2024

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.


codecov-commenter commented Sep 22, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 78.75%. Comparing base (458ece9) to head (a3e2089).

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #20277   +/-   ##
=======================================
  Coverage   78.75%   78.75%           
=======================================
  Files         510      510           
  Lines       48385    48385           
  Branches     8901     8901           
=======================================
  Hits        38106    38106           
  Misses       8446     8446           
  Partials     1833     1833           
Flag               Coverage Δ
keras              78.61% <ø> (ø)
keras-jax          62.15% <ø> (ø)
keras-numpy        57.31% <ø> (+<0.01%) ⬆️
keras-tensorflow   63.50% <ø> (ø)
keras-torch        62.20% <ø> (ø)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

gbaned (Collaborator) commented Sep 30, 2024

Hi @Shivam909058, can you please sign the CLA? Thank you!

gbaned (Collaborator) commented Nov 28, 2024

Hi @Shivam909058, any update on this PR? Thank you!
