In [0]:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
In [0]:
import tensorflow as tf
import numpy as np
Put all your imports and installs up into a setup section.
For general instructions on how to write docs for TensorFlow, see Writing TensorFlow Documentation.
The tips below are specific to notebooks for TensorFlow.
Only include a single H1 title. Include the __future__ imports. Be consistent about how you save your notebooks, otherwise the JSON diffs will be a mess.
This notebook has the "Omit code cell output when saving this notebook" option set. GitHub refuses to diff notebooks with large diffs (inline images).
reviewnb.com may help. You can access it using this bookmarklet:
javascript:(function(){ window.open(window.location.toString().replace(/github\.com/, 'app.reviewnb.com').replace(/files$/,"")); })()
To open a GitHub notebook in Colab use the Open in Colab extension (or make a bookmarklet).
The easiest way to edit a notebook in GitHub is to open it with Colab from the branch you want to edit. Then use File --> Save a copy in GitHub, which will save it back to the branch you opened it from.
For PRs it's helpful to post a direct Colab link to the PR head.
In [0]:
# Build the model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(None, 5)),
    tf.keras.layers.Dense(3)
])
In [0]:
# Run the model on a single batch of data, and inspect the output.
result = model(tf.constant(np.random.randn(10, 5), dtype=tf.float32)).numpy()

print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
In [0]:
# Compile the model for training.
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))
Keep examples quick. Use small datasets, or small slices of datasets. You don't need to train to convergence; train until it's obvious the model is making progress.
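As a minimal sketch of that tip, here is a quick training run on a tiny synthetic dataset (the data, layer sizes, and epoch count are all illustrative placeholders, not part of any real tutorial):

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic dataset, standing in for a small slice of a real dataset.
x = np.random.randn(256, 5).astype(np.float32)
y = np.random.randint(3, size=256)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(5,)),
    tf.keras.layers.Dense(3)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# A couple of epochs is enough to show the loss moving; no need for convergence.
history = model.fit(x, y, epochs=2, verbose=0)
print(history.history['loss'])
```

The whole run takes a few seconds, which keeps the notebook fast to re-execute and its saved output small.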
For a large example, don't try to fit all the code in the notebook. Add Python files to the tensorflow/examples repository, and in the notebook run: !pip install git+https://github.com/tensorflow/examples
Use the highest level API that gets the job done (unless the goal is to demonstrate the low level API).
Use keras.Sequential > keras functional API > keras model subclassing > ...
Use model.fit > model.train_on_batch > manual GradientTape loops.
Use eager-style code.
Use tensorflow_datasets and tf.data where possible.
Avoid compat.v1.
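Putting those preferences together, a sketch of the highest-level path that still gets the job done (the dataset here is synthetic, used only so the example is self-contained):

```python
import numpy as np
import tensorflow as tf

# Highest-level route: a tf.data pipeline feeding model.fit,
# rather than train_on_batch or a manual GradientTape loop.
x = np.random.randn(128, 5).astype(np.float32)
y = np.random.randint(3, size=128)
ds = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(128).batch(32)

# Sequential is preferred over the functional API or subclassing
# when the architecture is a plain stack of layers, as here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(5,)),
    tf.keras.layers.Dense(3)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(ds, epochs=1, verbose=0)
```

Dropping to a lower level (functional API, subclassing, GradientTape) is still the right call when the point of the notebook is to demonstrate that lower level.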