Keras Sequential Model

Today we're going to look at Keras, a high-level API for building neural networks with TensorFlow (or other backends -- but we'll focus on TensorFlow). This notebook focuses on the Sequential model, which is just a linear stack of layers.

Usually we don't need all of Keras, so you typically won't see import keras (importing the whole package pulls in far more than we need!). Instead, just import what we need:


In [16]:
from keras.models import Sequential # sequential model
from keras.layers import Dense, Activation # layers
import numpy as np

If your notebook uses a backend other than TensorFlow, see the Keras documentation on switching backends to set it to TensorFlow. Different backends have subtle differences that can lead to different results or bugs, and that's no fun when you're starting out!
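
If you're not sure which backend you're running, you can ask Keras directly -- a minimal check (keras.backend.backend() returns the backend's name as a string):

from keras import backend as K
print(K.backend())  # should print 'tensorflow' if the backend is set correctly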

Defining a Model

Here we'll look at the Sequential model, which is just an ordered stack of layers.


In [17]:
help(Sequential)


Help on class Sequential in module keras.models:

class Sequential(keras.engine.training.Model)
 |  Linear stack of layers.
 |  
 |  # Arguments
 |      layers: list of layers to add to the model.
 |  
 |  # Note
 |      The first layer passed to a Sequential model
 |      should have a defined input shape. What that
 |      means is that it should have received an `input_shape`
 |      or `batch_input_shape` argument,
 |      or for some type of layers (recurrent, Dense...)
 |      an `input_dim` argument.
 |  
 |  # Example
 |  
 |      ```python
 |          model = Sequential()
 |          # first layer must have a defined input shape
 |          model.add(Dense(32, input_dim=500))
 |          # afterwards, Keras does automatic shape inference
 |          model.add(Dense(32))
 |  
 |          # also possible (equivalent to the above):
 |          model = Sequential()
 |          model.add(Dense(32, input_shape=(500,)))
 |          model.add(Dense(32))
 |  
 |          # also possible (equivalent to the above):
 |          model = Sequential()
 |          # here the batch dimension is None,
 |          # which means any batch size will be accepted by the model.
 |          model.add(Dense(32, batch_input_shape=(None, 500)))
 |          model.add(Dense(32))
 |      ```
 |  
 |  Method resolution order:
 |      Sequential
 |      keras.engine.training.Model
 |      keras.engine.topology.Container
 |      keras.engine.topology.Layer
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, layers=None, name=None)
 |      Initialize self.  See help(type(self)) for accurate signature.
 |  
 |  add(self, layer)
 |      Adds a layer instance on top of the layer stack.
 |      
 |      # Arguments
 |          layer: layer instance.
 |      
 |      # Raises
 |          TypeError: If `layer` is not a layer instance.
 |          ValueError: In case the `layer` argument does not
 |              know its input shape.
 |          ValueError: In case the `layer` argument has
 |              multiple output tensors, or is already connected
 |              somewhere else (forbidden in `Sequential` models).
 |  
 |  build(self, input_shape=None)
 |      Creates the layer weights.
 |      
 |      Must be implemented on all layers that have weights.
 |      
 |      # Arguments
 |          input_shape: Keras tensor (future input to layer)
 |              or list/tuple of Keras tensors to reference
 |              for weight shape computations.
 |  
 |  call(self, inputs, mask=None)
 |      Call the model on new inputs.
 |      
 |      In this case `call` just reapplies
 |      all ops in the graph to the new inputs
 |      (e.g. build a new computational graph from the provided inputs).
 |      
 |      A model is callable on non-Keras tensors.
 |      
 |      # Arguments
 |          inputs: A tensor or list of tensors.
 |          mask: A mask or list of masks. A mask can be
 |              either a tensor or None (no mask).
 |      
 |      # Returns
 |          A tensor if there is a single output, or
 |          a list of tensors if there are more than one outputs.
 |  
 |  compile(self, optimizer, loss, metrics=None, sample_weight_mode=None, weighted_metrics=None, **kwargs)
 |      Configures the learning process.
 |      
 |      # Arguments
 |          optimizer: str (name of optimizer) or optimizer object.
 |              See [optimizers](/optimizers).
 |          loss: str (name of objective function) or objective function.
 |              See [losses](/losses).
 |          metrics: list of metrics to be evaluated by the model
 |              during training and testing.
 |              Typically you will use `metrics=['accuracy']`.
 |              See [metrics](/metrics).
 |          sample_weight_mode: if you need to do timestep-wise
 |              sample weighting (2D weights), set this to "temporal".
 |              "None" defaults to sample-wise weights (1D).
 |          weighted_metrics: list of metrics to be evaluated and weighted
 |              by sample_weight or class_weight during training and testing
 |          **kwargs: for Theano/CNTK backends, these are passed into
 |              K.function. When using the TensorFlow backend, these are
 |              passed into `tf.Session.run`.
 |      
 |      # Example
 |          ```python
 |              model = Sequential()
 |              model.add(Dense(32, input_shape=(500,)))
 |              model.add(Dense(10, activation='softmax'))
 |              model.compile(optimizer='rmsprop',
 |                            loss='categorical_crossentropy',
 |                            metrics=['accuracy'])
 |          ```
 |  
 |  evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None)
 |      Computes the loss on some input data, batch by batch.
 |      
 |      # Arguments
 |          x: input data, as a Numpy array or list of Numpy arrays
 |              (if the model has multiple inputs).
 |          y: labels, as a Numpy array.
 |          batch_size: integer. Number of samples per gradient update.
 |          verbose: verbosity mode, 0 or 1.
 |          sample_weight: sample weights, as a Numpy array.
 |      
 |      # Returns
 |          Scalar test loss (if the model has no metrics)
 |          or list of scalars (if the model computes other metrics).
 |          The attribute `model.metrics_names` will give you
 |          the display labels for the scalar outputs.
 |      
 |      # Raises
 |          RuntimeError: if the model was never compiled.
 |  
 |  evaluate_generator(self, generator, steps, max_queue_size=10, workers=1, use_multiprocessing=False)
 |      Evaluates the model on a data generator.
 |      
 |      The generator should return the same kind of data
 |      as accepted by `test_on_batch`.
 |      
 |      # Arguments
 |          generator: Generator yielding tuples (inputs, targets)
 |              or (inputs, targets, sample_weights)
 |          steps: Total number of steps (batches of samples)
 |              to yield from `generator` before stopping.
 |          max_queue_size: maximum size for the generator queue
 |          workers: maximum number of processes to spin up
 |          use_multiprocessing: if True, use process based threading.
 |              Note that because this implementation
 |              relies on multiprocessing, you should not pass
 |              non picklable arguments to the generator
 |              as they can't be passed easily to children processes.
 |      
 |      # Returns
 |          Scalar test loss (if the model has no metrics)
 |          or list of scalars (if the model computes other metrics).
 |          The attribute `model.metrics_names` will give you
 |          the display labels for the scalar outputs.
 |      
 |      # Raises
 |          RuntimeError: if the model was never compiled.
 |  
 |  fit(self, x, y, batch_size=32, epochs=10, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, **kwargs)
 |      Trains the model for a fixed number of epochs.
 |      
 |      # Arguments
 |          x: input data, as a Numpy array or list of Numpy arrays
 |              (if the model has multiple inputs).
 |          y: labels, as a Numpy array.
 |          batch_size: integer. Number of samples per gradient update.
 |          epochs: integer, the number of epochs to train the model.
 |          verbose: 0 for no logging to stdout,
 |              1 for progress bar logging, 2 for one log line per epoch.
 |          callbacks: list of `keras.callbacks.Callback` instances.
 |              List of callbacks to apply during training.
 |              See [callbacks](/callbacks).
 |          validation_split: float (0. < x < 1).
 |              Fraction of the data to use as held-out validation data.
 |          validation_data: tuple (x_val, y_val) or tuple
 |              (x_val, y_val, val_sample_weights) to be used as held-out
 |              validation data. Will override validation_split.
 |          shuffle: boolean or str (for 'batch').
 |              Whether to shuffle the samples at each epoch.
 |              'batch' is a special option for dealing with the
 |              limitations of HDF5 data; it shuffles in batch-sized chunks.
 |          class_weight: dictionary mapping classes to a weight value,
 |              used for scaling the loss function (during training only).
 |          sample_weight: Numpy array of weights for
 |              the training samples, used for scaling the loss function
 |              (during training only). You can either pass a flat (1D)
 |              Numpy array with the same length as the input samples
 |              (1:1 mapping between weights and samples),
 |              or in the case of temporal data,
 |              you can pass a 2D array with shape (samples, sequence_length),
 |              to apply a different weight to every timestep of every sample.
 |              In this case you should make sure to specify
 |              sample_weight_mode="temporal" in compile().
 |          initial_epoch: epoch at which to start training
 |              (useful for resuming a previous training run)
 |      
 |      # Returns
 |          A `History` object. Its `History.history` attribute is
 |          a record of training loss values and metrics values
 |          at successive epochs, as well as validation loss values
 |          and validation metrics values (if applicable).
 |      
 |      # Raises
 |          RuntimeError: if the model was never compiled.
 |  
 |  fit_generator(self, generator, steps_per_epoch, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, initial_epoch=0)
 |      Fits the model on data generated batch-by-batch by a Python generator.
 |      
 |      The generator is run in parallel to the model, for efficiency.
 |      For instance, this allows you to do real-time data augmentation
 |      on images on CPU in parallel to training your model on GPU.
 |      
 |      # Arguments
 |          generator: A generator.
 |              The output of the generator must be either
 |              - a tuple (inputs, targets)
 |              - a tuple (inputs, targets, sample_weights).
 |              All arrays should contain the same number of samples.
 |              The generator is expected to loop over its data
 |              indefinitely. An epoch finishes when `steps_per_epoch`
 |              batches have been seen by the model.
 |          steps_per_epoch: Total number of steps (batches of samples)
 |              to yield from `generator` before declaring one epoch
 |              finished and starting the next epoch. It should typically
 |              be equal to the number of unique samples of your dataset
 |              divided by the batch size.
 |          epochs: Integer, total number of iterations on the data.
 |          verbose: Verbosity mode, 0, 1, or 2.
 |          callbacks: List of callbacks to be called during training.
 |          validation_data: This can be either
 |              - A generator for the validation data
 |              - A tuple (inputs, targets)
 |              - A tuple (inputs, targets, sample_weights).
 |          validation_steps: Only relevant if `validation_data`
 |              is a generator.
 |              Number of steps to yield from validation generator
 |              at the end of every epoch. It should typically
 |              be equal to the number of unique samples of your
 |              validation dataset divided by the batch size.
 |          class_weight: Dictionary mapping class indices to a weight
 |              for the class.
 |          max_queue_size: Maximum size for the generator queue
 |          workers: Maximum number of processes to spin up
 |          use_multiprocessing: if True, use process based threading.
 |              Note that because
 |              this implementation relies on multiprocessing,
 |              you should not pass
 |              non picklable arguments to the generator
 |              as they can't be passed
 |              easily to children processes.
 |          initial_epoch: Epoch at which to start training
 |              (useful for resuming a previous training run)
 |      
 |      # Returns
 |          A `History` object.
 |      
 |      # Raises
 |          RuntimeError: if the model was never compiled.
 |      
 |      # Example
 |      
 |      ```python
 |          def generate_arrays_from_file(path):
 |              while 1:
 |                  f = open(path)
 |                  for line in f:
 |                      # create Numpy arrays of input data
 |                      # and labels, from each line in the file
 |                      x, y = process_line(line)
 |                      yield (x, y)
 |                  f.close()
 |      
 |          model.fit_generator(generate_arrays_from_file('/my_file.txt'),
 |                              steps_per_epoch=1000, epochs=10)
 |      ```
 |  
 |  get_config(self)
 |      Returns the config of the layer.
 |      
 |      A layer config is a Python dictionary (serializable)
 |      containing the configuration of a layer.
 |      The same layer can be reinstantiated later
 |      (without its trained weights) from this configuration.
 |      
 |      The config of a layer does not include connectivity
 |      information, nor the layer class name. These are handled
 |      by `Container` (one layer of abstraction above).
 |      
 |      # Returns
 |          Python dictionary.
 |  
 |  get_layer(self, name=None, index=None)
 |      Retrieve a layer that is part of the model.
 |      
 |      Returns a layer based on either its name (unique)
 |      or its index in the graph. Indices are based on
 |      order of horizontal graph traversal (bottom-up).
 |      
 |      # Arguments
 |          name: string, name of layer.
 |          index: integer, index of layer.
 |      
 |      # Returns
 |          A layer instance.
 |  
 |  get_losses_for(self, inputs)
 |  
 |  get_updates_for(self, inputs)
 |  
 |  get_weights(self)
 |      Retrieves the weights of the model.
 |      
 |      # Returns
 |          A flat list of Numpy arrays
 |          (one array per model weight).
 |  
 |  legacy_get_config(self)
 |      Retrieves the model configuration as a Python list.
 |      
 |      # Returns
 |          A list of dicts (each dict is a layer config).
 |  
 |  load_weights(self, filepath, by_name=False)
 |      Loads all layer weights from a HDF5 save file.
 |      
 |      If `by_name` is False (default) weights are loaded
 |      based on the network's topology, meaning the architecture
 |      should be the same as when the weights were saved.
 |      Note that layers that don't have weights are not taken
 |      into account in the topological ordering, so adding or
 |      removing layers is fine as long as they don't have weights.
 |      
 |      If `by_name` is True, weights are loaded into layers
 |      only if they share the same name. This is useful
 |      for fine-tuning or transfer-learning models where
 |      some of the layers have changed.
 |      
 |      # Arguments
 |          filepath: String, path to the weights file to load.
 |          by_name: Boolean, whether to load weights by name
 |              or by topological order.
 |      
 |      # Raises
 |          ImportError: If h5py is not available.
 |  
 |  pop(self)
 |      Removes the last layer in the model.
 |      
 |      # Raises
 |          TypeError: if there are no layers in the model.
 |  
 |  predict(self, x, batch_size=32, verbose=0)
 |      Generates output predictions for the input samples.
 |      
 |      The input samples are processed batch by batch.
 |      
 |      # Arguments
 |          x: the input data, as a Numpy array.
 |          batch_size: integer.
 |          verbose: verbosity mode, 0 or 1.
 |      
 |      # Returns
 |          A Numpy array of predictions.
 |  
 |  predict_classes(self, x, batch_size=32, verbose=1)
 |      Generate class predictions for the input samples.
 |      
 |      The input samples are processed batch by batch.
 |      
 |      # Arguments
 |          x: input data, as a Numpy array or list of Numpy arrays
 |              (if the model has multiple inputs).
 |          batch_size: integer.
 |          verbose: verbosity mode, 0 or 1.
 |      
 |      # Returns
 |          A numpy array of class predictions.
 |  
 |  predict_generator(self, generator, steps, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)
 |      Generates predictions for the input samples from a data generator.
 |      
 |      The generator should return the same kind of data as accepted by
 |      `predict_on_batch`.
 |      
 |      # Arguments
 |          generator: generator yielding batches of input samples.
 |          steps: Total number of steps (batches of samples)
 |              to yield from `generator` before stopping.
 |          max_queue_size: maximum size for the generator queue
 |          workers: maximum number of processes to spin up
 |          use_multiprocessing: if True, use process based threading.
 |              Note that because this implementation
 |              relies on multiprocessing, you should not pass
 |              non picklable arguments to the generator
 |              as they can't be passed easily to children processes.
 |          verbose: verbosity mode, 0 or 1.
 |      
 |      # Returns
 |          A Numpy array of predictions.
 |  
 |  predict_on_batch(self, x)
 |      Returns predictions for a single batch of samples.
 |      
 |      # Arguments
 |          x: input data, as a Numpy array or list of Numpy arrays
 |              (if the model has multiple inputs).
 |      
 |      # Returns
 |          A Numpy array of predictions.
 |  
 |  predict_proba(self, x, batch_size=32, verbose=1)
 |      Generates class probability predictions for the input samples.
 |      
 |      The input samples are processed batch by batch.
 |      
 |      # Arguments
 |          x: input data, as a Numpy array or list of Numpy arrays
 |              (if the model has multiple inputs).
 |          batch_size: integer.
 |          verbose: verbosity mode, 0 or 1.
 |      
 |      # Returns
 |          A Numpy array of probability predictions.
 |  
 |  save_weights(self, filepath, overwrite=True)
 |      Dumps all layer weights to a HDF5 file.
 |      
 |      The weight file has:
 |          - `layer_names` (attribute), a list of strings
 |              (ordered names of model layers).
 |          - For every layer, a `group` named `layer.name`
 |              - For every such layer group, a group attribute `weight_names`,
 |                  a list of strings
 |                  (ordered names of weights tensor of the layer).
 |              - For every weight in the layer, a dataset
 |                  storing the weight value, named after the weight tensor.
 |      
 |      # Arguments
 |          filepath: String, path to the file to save the weights to.
 |          overwrite: Whether to silently overwrite any existing file at the
 |              target location, or provide the user with a manual prompt.
 |      
 |      # Raises
 |          ImportError: If h5py is not available.
 |  
 |  set_weights(self, weights)
 |      Sets the weights of the model.
 |      
 |      # Arguments
 |          weights: Should be a list
 |              of Numpy arrays with shapes and types matching
 |              the output of `model.get_weights()`.
 |  
 |  test_on_batch(self, x, y, sample_weight=None)
 |      Evaluates the model over a single batch of samples.
 |      
 |      # Arguments
 |          x: input data, as a Numpy array or list of Numpy arrays
 |              (if the model has multiple inputs).
 |          y: labels, as a Numpy array.
 |          sample_weight: sample weights, as a Numpy array.
 |      
 |      # Returns
 |          Scalar test loss (if the model has no metrics)
 |          or list of scalars (if the model computes other metrics).
 |          The attribute `model.metrics_names` will give you
 |          the display labels for the scalar outputs.
 |      
 |      # Raises
 |          RuntimeError: if the model was never compiled.
 |  
 |  train_on_batch(self, x, y, class_weight=None, sample_weight=None)
 |      Single gradient update over one batch of samples.
 |      
 |      # Arguments
 |          x: input data, as a Numpy array or list of Numpy arrays
 |              (if the model has multiple inputs).
 |          y: labels, as a Numpy array.
 |          class_weight: dictionary mapping classes to a weight value,
 |              used for scaling the loss function (during training only).
 |          sample_weight: sample weights, as a Numpy array.
 |      
 |      # Returns
 |          Scalar training loss (if the model has no metrics)
 |          or list of scalars (if the model computes other metrics).
 |          The attribute `model.metrics_names` will give you
 |          the display labels for the scalar outputs.
 |      
 |      # Raises
 |          RuntimeError: if the model was never compiled.
 |  
 |  ----------------------------------------------------------------------
 |  Class methods defined here:
 |  
 |  from_config(config, custom_objects=None) from builtins.type
 |      Instantiates a Model from its config (output of `get_config()`).
 |      
 |      # Arguments
 |          config: Model config dictionary.
 |          custom_objects: Optional dictionary mapping names
 |              (strings) to custom classes or functions to be
 |              considered during deserialization.
 |      
 |      # Returns
 |          A model instance.
 |      
 |      # Raises
 |          ValueError: In case of improperly formatted config dict.
 |  
 |  legacy_from_config(config, layer_cache=None) from builtins.type
 |      Load a model from a legacy configuration.
 |      
 |      # Arguments
 |          config: dictionary with configuration.
 |          layer_cache: cache to draw pre-existing layer.
 |      
 |      # Returns
 |          The loaded Model.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  losses
 |      Retrieve the model's losses.
 |      
 |      Will only include losses that are either
 |      inconditional, or conditional on inputs to this model
 |      (e.g. will not include losses that depend on tensors
 |      that aren't inputs to this model).
 |      
 |      # Returns
 |          A list of loss tensors.
 |  
 |  non_trainable_weights
 |  
 |  regularizers
 |  
 |  state_updates
 |      Returns the `updates` from all layers that are stateful.
 |      
 |      This is useful for separating training updates and
 |      state updates, e.g. when we need to update a layer's internal state
 |      during prediction.
 |      
 |      # Returns
 |          A list of update ops.
 |  
 |  trainable
 |  
 |  trainable_weights
 |  
 |  updates
 |      Retrieve the model's updates.
 |      
 |      Will only include updates that are either
 |      inconditional, or conditional on inputs to this model
 |      (e.g. will not include updates that depend on tensors
 |      that aren't inputs to this model).
 |      
 |      # Returns
 |          A list of update ops.
 |  
 |  uses_learning_phase
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.engine.topology.Container:
 |  
 |  compute_mask(self, inputs, mask)
 |      Computes an output mask tensor.
 |      
 |      # Arguments
 |          inputs: Tensor or list of tensors.
 |          mask: Tensor or list of tensors.
 |      
 |      # Returns
 |          None or a tensor (or list of tensors,
 |              one per output tensor of the layer).
 |  
 |  compute_output_shape(self, input_shape)
 |      Computes the output shape of the layer.
 |      
 |      Assumes that the layer will be built
 |      to match that input shape provided.
 |      
 |      # Arguments
 |          input_shape: Shape tuple (tuple of integers)
 |              or list of shape tuples (one per output tensor of the layer).
 |              Shape tuples can include None for free dimensions,
 |              instead of an integer.
 |      
 |      # Returns
 |          An input shape tuple.
 |  
 |  reset_states(self)
 |  
 |  run_internal_graph(self, inputs, masks=None)
 |      Computes output tensors for new inputs.
 |      
 |      # Note:
 |          - Expects `inputs` to be a list (potentially with 1 element).
 |          - Can be run on non-Keras tensors.
 |      
 |      # Arguments
 |          inputs: List of tensors
 |          masks: List of masks (tensors or None).
 |      
 |      # Returns
 |          Three lists: output_tensors, output_masks, output_shapes
 |  
 |  save(self, filepath, overwrite=True, include_optimizer=True)
 |      Save the model to a single HDF5 file.
 |      
 |      The savefile includes:
 |          - The model architecture, allowing to re-instantiate the model.
 |          - The model weights.
 |          - The state of the optimizer, allowing to resume training
 |              exactly where you left off.
 |      
 |      This allows you to save the entirety of the state of a model
 |      in a single file.
 |      
 |      Saved models can be reinstantiated via `keras.models.load_model`.
 |      The model returned by `load_model`
 |      is a compiled model ready to be used (unless the saved model
 |      was never compiled in the first place).
 |      
 |      # Arguments
 |          filepath: String, path to the file to save the weights to.
 |          overwrite: Whether to silently overwrite any existing file at the
 |              target location, or provide the user with a manual prompt.
 |          include_optimizer: If True, save optimizer's state together.
 |      
 |      # Example
 |      
 |      ```python
 |      from keras.models import load_model
 |      
 |      model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
 |      del model  # deletes the existing model
 |      
 |      # returns a compiled model
 |      # identical to the previous one
 |      model = load_model('my_model.h5')
 |      ```
 |  
 |  summary(self, line_length=None, positions=None, print_fn=<built-in function print>)
 |      Prints a string summary of the network.
 |      
 |      # Arguments
 |          line_length: Total length of printed lines
 |              (e.g. set this to adapt the display to different
 |              terminal window sizes).
 |          positions: Relative or absolute positions of log elements
 |              in each line. If not provided,
 |              defaults to `[.33, .55, .67, 1.]`.
 |          print_fn: Print function to use.
 |              It will be called on each line of the summary.
 |              You can set it to a custom function
 |              in order to capture the string summary.
 |  
 |  to_json(self, **kwargs)
 |      Returns a JSON string containing the network configuration.
 |      
 |      To load a network from a JSON save file, use
 |      `keras.models.model_from_json(json_string, custom_objects={})`.
 |      
 |      # Arguments
 |          **kwargs: Additional keyword arguments
 |              to be passed to `json.dumps()`.
 |      
 |      # Returns
 |          A JSON string.
 |  
 |  to_yaml(self, **kwargs)
 |      Returns a yaml string containing the network configuration.
 |      
 |      To load a network from a yaml save file, use
 |      `keras.models.model_from_yaml(yaml_string, custom_objects={})`.
 |      
 |      `custom_objects` should be a dictionary mapping
 |      the names of custom losses / layers / etc to the corresponding
 |      functions / classes.
 |      
 |      # Arguments
 |          **kwargs: Additional keyword arguments
 |              to be passed to `yaml.dump()`.
 |      
 |      # Returns
 |          A YAML string.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from keras.engine.topology.Container:
 |  
 |  input_spec
 |      Gets the model's input specs.
 |      
 |      # Returns
 |          A list of `InputSpec` instances (one per input to the model)
 |              or a single instance if the model has only one input.
 |  
 |  stateful
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.engine.topology.Layer:
 |  
 |  __call__(self, inputs, **kwargs)
 |      Wrapper around self.call(), for handling internal references.
 |      
 |      If a Keras tensor is passed:
 |          - We call self._add_inbound_node().
 |          - If necessary, we `build` the layer to match
 |              the _keras_shape of the input(s).
 |          - We update the _keras_shape of every input tensor with
 |              its new shape (obtained via self.compute_output_shape).
 |              This is done as part of _add_inbound_node().
 |          - We update the _keras_history of the output tensor(s)
 |              with the current layer.
 |              This is done as part of _add_inbound_node().
 |      
 |      # Arguments
 |          inputs: Can be a tensor or list/tuple of tensors.
 |          **kwargs: Additional keyword arguments to be passed to `call()`.
 |      
 |      # Returns
 |          Output of the layer's `call` method.
 |      
 |      # Raises
 |          ValueError: in case the layer is missing shape information
 |              for its `build` call.
 |  
 |  add_loss(self, losses, inputs=None)
 |      Add losses to the layer.
 |      
 |      The loss may potentially be conditional on some inputs tensors,
 |      for instance activity losses are conditional on the layer's inputs.
 |      
 |      # Arguments
 |          losses: loss tensor or list of loss tensors
 |              to add to the layer.
 |          inputs: input tensor or list of inputs tensors to mark
 |              the losses as conditional on these inputs.
 |              If None is passed, the loss is assumed unconditional
 |              (e.g. L2 weight regularization, which only depends
 |              on the layer's weights variables, not on any inputs tensors).
 |  
 |  add_update(self, updates, inputs=None)
 |      Add updates to the layer.
 |      
 |      The updates may potentially be conditional on some inputs tensors,
 |      for instance batch norm updates are conditional on the layer's inputs.
 |      
 |      # Arguments
 |          updates: update op or list of update ops
 |              to add to the layer.
 |          inputs: input tensor or list of inputs tensors to mark
 |              the updates as conditional on these inputs.
 |              If None is passed, the updates are assumed unconditional.
 |  
 |  add_weight(self, name, shape, dtype=None, initializer=None, regularizer=None, trainable=True, constraint=None)
 |      Adds a weight variable to the layer.
 |      
 |      # Arguments
 |          name: String, the name for the weight variable.
 |          shape: The shape tuple of the weight.
 |          dtype: The dtype of the weight.
 |          initializer: An Initializer instance (callable).
 |          regularizer: An optional Regularizer instance.
 |          trainable: A boolean, whether the weight should
 |              be trained via backprop or not (assuming
 |              that the layer itself is also trainable).
 |          constraint: An optional Constraint instance.
 |      
 |      # Returns
 |          The created weight variable.
 |  
 |  assert_input_compatibility(self, inputs)
 |      Checks compatibility between the layer and provided inputs.
 |      
 |      This checks that the tensor(s) `input`
 |      verify the input assumptions of the layer
 |      (if any). If not, exceptions are raised.
 |      
 |      # Arguments
 |          inputs: input tensor or list of input tensors.
 |      
 |      # Raises
 |          ValueError: in case of mismatch between
 |              the provided inputs and the expectations of the layer.
 |  
 |  count_params(self)
 |      Count the total number of scalars composing the weights.
 |      
 |      # Returns
 |          An integer count.
 |      
 |      # Raises
 |          RuntimeError: if the layer isn't yet built
 |              (in which case its weights aren't yet defined).
 |  
 |  get_input_at(self, node_index)
 |      Retrieves the input tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A tensor (or list of tensors if the layer has multiple inputs).
 |  
 |  get_input_mask_at(self, node_index)
 |      Retrieves the input mask tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A mask tensor
 |          (or list of tensors if the layer has multiple inputs).
 |  
 |  get_input_shape_at(self, node_index)
 |      Retrieves the input shape(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple inputs).
 |  
 |  get_output_at(self, node_index)
 |      Retrieves the output tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A tensor (or list of tensors if the layer has multiple outputs).
 |  
 |  get_output_mask_at(self, node_index)
 |      Retrieves the output mask tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A mask tensor
 |          (or list of tensors if the layer has multiple outputs).
 |  
 |  get_output_shape_at(self, node_index)
 |      Retrieves the output shape(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple outputs).
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from keras.engine.topology.Layer:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  built
 |  
 |  input
 |      Retrieves the input tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Input tensor or list of input tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  input_mask
 |      Retrieves the input mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Input mask tensor (potentially None) or list of input
 |          mask tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  input_shape
 |      Retrieves the input shape tuple(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Input shape tuple
 |          (or list of input shape tuples, one tuple per input tensor).
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  output
 |      Retrieves the output tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Output tensor or list of output tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  output_mask
 |      Retrieves the output mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Output mask tensor (potentially None) or list of output
 |          mask tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  output_shape
 |      Retrieves the output shape tuple(s) of a layer.
 |      
 |      Only applicable if the layer has one inbound node,
 |      or if all inbound nodes have the same output shape.
 |      
 |      # Returns
 |          Output shape tuple
 |          (or list of input shape tuples, one tuple per output tensor).
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  weights

One way to specify a model is to pass a list (using []) of layers to Sequential(). NB, from the help above:

The first layer passed to a Sequential model should have a defined input shape. What that means is that it should have received an input_shape or batch_input_shape argument, or for some type of layers (recurrent, Dense...) an input_dim argument.

Like TensorFlow, Keras separates defining the model from running it, so defining a model takes essentially no time.


In [18]:
layers = [
    Dense(32, input_shape=(784,)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
]
model = Sequential(layers)
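
Before visualizing anything, a quick sanity check: model.summary() prints each layer with its output shape and parameter count. For this model, the first Dense layer should report 784*32 weights + 32 biases = 25,120 parameters:

model.summary()  # one row per layer: name, output shape, parameter count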

Let's visualize this simple model we've created! NB: the graphviz package can be a pain to install correctly, so you may not be able to run this. Keras's own plotting functions just call graphviz through a backend and don't check that it's installed, so consult the graphviz documentation if the cell below fails.


In [19]:
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot

SVG(model_to_dot(model).create(prog='dot', format='svg'))


Out[19]:
[model graph SVG: dense_9_input (InputLayer) -> dense_9 (Dense) -> activation_5 (Activation) -> dense_10 (Dense) -> activation_6 (Activation)]

What have we done? We have an input layer (whose shape we specified) that feeds into a "Dense-32" layer (more on that in a second). Then comes a "relu" activation layer, followed by a "Dense-10" layer and a final "softmax" activation layer. Before we talk about these layers, let's see an alternative way to write the same model.


In [20]:
model = Sequential()
model.add(Dense(32, input_dim=784))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
SVG(model_to_dot(model).create(prog='dot', format='svg'))


Out[20]:
[model graph SVG: dense_11_input (InputLayer) -> dense_11 (Dense) -> activation_7 (Activation) -> dense_12 (Dense) -> activation_8 (Activation)]

So we can see that, other than the names (which are generated fresh each time -- don't read much into them), we have the same thing. Thus we can create models by:

  • creating a list of layers and passing this as an argument to Sequential()
  • instantiating an empty Sequential() and then using model.add()
  • some combination of both -- you can get fancy, as the sketch below shows
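
For example, here's a minimal sketch of that combination -- start from a list, then keep stacking with model.add():

model = Sequential([
    Dense(32, input_shape=(784,)),  # first layer needs a defined input shape
    Activation('relu'),
])
model.add(Dense(10))                # shape inference takes over from here
model.add(Activation('softmax'))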

About Layers

OK cool -- we can define a model. But what are those layers?


In [21]:
help(Dense)


Help on class Dense in module keras.layers.core:

class Dense(keras.engine.topology.Layer)
 |  Just your regular densely-connected NN layer.
 |  
 |  `Dense` implements the operation:
 |  `output = activation(dot(input, kernel) + bias)`
 |  where `activation` is the element-wise activation function
 |  passed as the `activation` argument, `kernel` is a weights matrix
 |  created by the layer, and `bias` is a bias vector created by the layer
 |  (only applicable if `use_bias` is `True`).
 |  
 |  Note: if the input to the layer has a rank greater than 2, then
 |  it is flattened prior to the initial dot product with `kernel`.
 |  
 |  # Example
 |  
 |  ```python
 |      # as first layer in a sequential model:
 |      model = Sequential()
 |      model.add(Dense(32, input_shape=(16,)))
 |      # now the model will take as input arrays of shape (*, 16)
 |      # and output arrays of shape (*, 32)
 |  
 |      # after the first layer, you don't need to specify
 |      # the size of the input anymore:
 |      model.add(Dense(32))
 |  ```
 |  
 |  # Arguments
 |      units: Positive integer, dimensionality of the output space.
 |      activation: Activation function to use
 |          (see [activations](../activations.md)).
 |          If you don't specify anything, no activation is applied
 |          (ie. "linear" activation: `a(x) = x`).
 |      use_bias: Boolean, whether the layer uses a bias vector.
 |      kernel_initializer: Initializer for the `kernel` weights matrix
 |          (see [initializers](../initializers.md)).
 |      bias_initializer: Initializer for the bias vector
 |          (see [initializers](../initializers.md)).
 |      kernel_regularizer: Regularizer function applied to
 |          the `kernel` weights matrix
 |          (see [regularizer](../regularizers.md)).
 |      bias_regularizer: Regularizer function applied to the bias vector
 |          (see [regularizer](../regularizers.md)).
 |      activity_regularizer: Regularizer function applied to
 |          the output of the layer (its "activation").
 |          (see [regularizer](../regularizers.md)).
 |      kernel_constraint: Constraint function applied to
 |          the `kernel` weights matrix
 |          (see [constraints](../constraints.md)).
 |      bias_constraint: Constraint function applied to the bias vector
 |          (see [constraints](../constraints.md)).
 |  
 |  # Input shape
 |      nD tensor with shape: `(batch_size, ..., input_dim)`.
 |      The most common situation would be
 |      a 2D input with shape `(batch_size, input_dim)`.
 |  
 |  # Output shape
 |      nD tensor with shape: `(batch_size, ..., units)`.
 |      For instance, for a 2D input with shape `(batch_size, input_dim)`,
 |      the output would have shape `(batch_size, units)`.
 |  
 |  Method resolution order:
 |      Dense
 |      keras.engine.topology.Layer
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)
 |      Initialize self.  See help(type(self)) for accurate signature.
 |  
 |  build(self, input_shape)
 |      Creates the layer weights.
 |      
 |      Must be implemented on all layers that have weights.
 |      
 |      # Arguments
 |          input_shape: Keras tensor (future input to layer)
 |              or list/tuple of Keras tensors to reference
 |              for weight shape computations.
 |  
 |  call(self, inputs)
 |      This is where the layer's logic lives.
 |      
 |      # Arguments
 |          inputs: Input tensor, or list/tuple of input tensors.
 |          **kwargs: Additional keyword arguments.
 |      
 |      # Returns
 |          A tensor or list/tuple of tensors.
 |  
 |  compute_output_shape(self, input_shape)
 |      Computes the output shape of the layer.
 |      
 |      Assumes that the layer will be built
 |      to match that input shape provided.
 |      
 |      # Arguments
 |          input_shape: Shape tuple (tuple of integers)
 |              or list of shape tuples (one per output tensor of the layer).
 |              Shape tuples can include None for free dimensions,
 |              instead of an integer.
 |      
 |      # Returns
 |          An input shape tuple.
 |  
 |  get_config(self)
 |      Returns the config of the layer.
 |      
 |      A layer config is a Python dictionary (serializable)
 |      containing the configuration of a layer.
 |      The same layer can be reinstantiated later
 |      (without its trained weights) from this configuration.
 |      
 |      The config of a layer does not include connectivity
 |      information, nor the layer class name. These are handled
 |      by `Container` (one layer of abstraction above).
 |      
 |      # Returns
 |          Python dictionary.
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.engine.topology.Layer:
 |  
 |  __call__(self, inputs, **kwargs)
 |      Wrapper around self.call(), for handling internal references.
 |      
 |      If a Keras tensor is passed:
 |          - We call self._add_inbound_node().
 |          - If necessary, we `build` the layer to match
 |              the _keras_shape of the input(s).
 |          - We update the _keras_shape of every input tensor with
 |              its new shape (obtained via self.compute_output_shape).
 |              This is done as part of _add_inbound_node().
 |          - We update the _keras_history of the output tensor(s)
 |              with the current layer.
 |              This is done as part of _add_inbound_node().
 |      
 |      # Arguments
 |          inputs: Can be a tensor or list/tuple of tensors.
 |          **kwargs: Additional keyword arguments to be passed to `call()`.
 |      
 |      # Returns
 |          Output of the layer's `call` method.
 |      
 |      # Raises
 |          ValueError: in case the layer is missing shape information
 |              for its `build` call.
 |  
 |  add_loss(self, losses, inputs=None)
 |      Add losses to the layer.
 |      
 |      The loss may potentially be conditional on some inputs tensors,
 |      for instance activity losses are conditional on the layer's inputs.
 |      
 |      # Arguments
 |          losses: loss tensor or list of loss tensors
 |              to add to the layer.
 |          inputs: input tensor or list of inputs tensors to mark
 |              the losses as conditional on these inputs.
 |              If None is passed, the loss is assumed unconditional
 |              (e.g. L2 weight regularization, which only depends
 |              on the layer's weights variables, not on any inputs tensors).
 |  
 |  add_update(self, updates, inputs=None)
 |      Add updates to the layer.
 |      
 |      The updates may potentially be conditional on some inputs tensors,
 |      for instance batch norm updates are conditional on the layer's inputs.
 |      
 |      # Arguments
 |          updates: update op or list of update ops
 |              to add to the layer.
 |          inputs: input tensor or list of inputs tensors to mark
 |              the updates as conditional on these inputs.
 |              If None is passed, the updates are assumed unconditional.
 |  
 |  add_weight(self, name, shape, dtype=None, initializer=None, regularizer=None, trainable=True, constraint=None)
 |      Adds a weight variable to the layer.
 |      
 |      # Arguments
 |          name: String, the name for the weight variable.
 |          shape: The shape tuple of the weight.
 |          dtype: The dtype of the weight.
 |          initializer: An Initializer instance (callable).
 |          regularizer: An optional Regularizer instance.
 |          trainable: A boolean, whether the weight should
 |              be trained via backprop or not (assuming
 |              that the layer itself is also trainable).
 |          constraint: An optional Constraint instance.
 |      
 |      # Returns
 |          The created weight variable.
 |  
 |  assert_input_compatibility(self, inputs)
 |      Checks compatibility between the layer and provided inputs.
 |      
 |      This checks that the tensor(s) `input`
 |      verify the input assumptions of the layer
 |      (if any). If not, exceptions are raised.
 |      
 |      # Arguments
 |          inputs: input tensor or list of input tensors.
 |      
 |      # Raises
 |          ValueError: in case of mismatch between
 |              the provided inputs and the expectations of the layer.
 |  
 |  compute_mask(self, inputs, mask=None)
 |      Computes an output mask tensor.
 |      
 |      # Arguments
 |          inputs: Tensor or list of tensors.
 |          mask: Tensor or list of tensors.
 |      
 |      # Returns
 |          None or a tensor (or list of tensors,
 |              one per output tensor of the layer).
 |  
 |  count_params(self)
 |      Count the total number of scalars composing the weights.
 |      
 |      # Returns
 |          An integer count.
 |      
 |      # Raises
 |          RuntimeError: if the layer isn't yet built
 |              (in which case its weights aren't yet defined).
 |  
 |  get_input_at(self, node_index)
 |      Retrieves the input tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A tensor (or list of tensors if the layer has multiple inputs).
 |  
 |  get_input_mask_at(self, node_index)
 |      Retrieves the input mask tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A mask tensor
 |          (or list of tensors if the layer has multiple inputs).
 |  
 |  get_input_shape_at(self, node_index)
 |      Retrieves the input shape(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple inputs).
 |  
 |  get_losses_for(self, inputs)
 |  
 |  get_output_at(self, node_index)
 |      Retrieves the output tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A tensor (or list of tensors if the layer has multiple outputs).
 |  
 |  get_output_mask_at(self, node_index)
 |      Retrieves the output mask tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A mask tensor
 |          (or list of tensors if the layer has multiple outputs).
 |  
 |  get_output_shape_at(self, node_index)
 |      Retrieves the output shape(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple outputs).
 |  
 |  get_updates_for(self, inputs)
 |  
 |  get_weights(self)
 |      Returns the current weights of the layer.
 |      
 |      # Returns
 |          Weights values as a list of numpy arrays.
 |  
 |  set_weights(self, weights)
 |      Sets the weights of the layer, from Numpy arrays.
 |      
 |      # Arguments
 |          weights: a list of Numpy arrays. The number
 |              of arrays and their shape must match
 |              number of the dimensions of the weights
 |              of the layer (i.e. it should match the
 |              output of `get_weights`).
 |      
 |      # Raises
 |          ValueError: If the provided weights list does not match the
 |              layer's specifications.
 |  
 |  ----------------------------------------------------------------------
 |  Class methods inherited from keras.engine.topology.Layer:
 |  
 |  from_config(config) from builtins.type
 |      Creates a layer from its config.
 |      
 |      This method is the reverse of `get_config`,
 |      capable of instantiating the same layer from the config
 |      dictionary. It does not handle layer connectivity
 |      (handled by Container), nor weights (handled by `set_weights`).
 |      
 |      # Arguments
 |          config: A Python dictionary, typically the
 |              output of get_config.
 |      
 |      # Returns
 |          A layer instance.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from keras.engine.topology.Layer:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  built
 |  
 |  input
 |      Retrieves the input tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Input tensor or list of input tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  input_mask
 |      Retrieves the input mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Input mask tensor (potentially None) or list of input
 |          mask tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  input_shape
 |      Retrieves the input shape tuple(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Input shape tuple
 |          (or list of input shape tuples, one tuple per input tensor).
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  losses
 |  
 |  non_trainable_weights
 |  
 |  output
 |      Retrieves the output tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Output tensor or list of output tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  output_mask
 |      Retrieves the output mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Output mask tensor (potentially None) or list of output
 |          mask tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  output_shape
 |      Retrieves the output shape tuple(s) of a layer.
 |      
 |      Only applicable if the layer has one inbound node,
 |      or if all inbound nodes have the same output shape.
 |      
 |      # Returns
 |          Output shape tuple
 |          (or list of input shape tuples, one tuple per output tensor).
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  trainable_weights
 |  
 |  updates
 |  
 |  weights

In other words, a Dense layer creates a layer of nodes (as many as its units argument), where every node from the previous layer is connected to every node in the new one. (If this is the first layer you specify the input shape; otherwise you don't -- Keras infers it.) A big tangled web! Remember what we learned though: we need an activation layer for our neurons!
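For instance, here are two equivalent ways to wire that up -- a separate Activation layer, or the activation argument to Dense. (A minimal sketch, not executed in this notebook; the layer sizes are made up for illustration.)

# Activation as its own layer:
model_a = Sequential()
model_a.add(Dense(32, input_dim=100))  # 32 nodes, each connected to all 100 inputs
model_a.add(Activation('relu'))        # apply ReLU to every node's output

# Equivalent: activation passed as an argument to Dense
model_b = Sequential()
model_b.add(Dense(32, activation='relu', input_dim=100))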


In [22]:
help(Activation)


Help on class Activation in module keras.layers.core:

class Activation(keras.engine.topology.Layer)
 |  Applies an activation function to an output.
 |  
 |  # Arguments
 |      activation: name of activation function to use
 |          (see: [activations](../activations.md)),
 |          or alternatively, a Theano or TensorFlow operation.
 |  
 |  # Input shape
 |      Arbitrary. Use the keyword argument `input_shape`
 |      (tuple of integers, does not include the samples axis)
 |      when using this layer as the first layer in a model.
 |  
 |  # Output shape
 |      Same shape as input.
 |  
 |  Method resolution order:
 |      Activation
 |      keras.engine.topology.Layer
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, activation, **kwargs)
 |      Initialize self.  See help(type(self)) for accurate signature.
 |  
 |  call(self, inputs)
 |      This is where the layer's logic lives.
 |      
 |      # Arguments
 |          inputs: Input tensor, or list/tuple of input tensors.
 |          **kwargs: Additional keyword arguments.
 |      
 |      # Returns
 |          A tensor or list/tuple of tensors.
 |  
 |  get_config(self)
 |      Returns the config of the layer.
 |      
 |      A layer config is a Python dictionary (serializable)
 |      containing the configuration of a layer.
 |      The same layer can be reinstantiated later
 |      (without its trained weights) from this configuration.
 |      
 |      The config of a layer does not include connectivity
 |      information, nor the layer class name. These are handled
 |      by `Container` (one layer of abstraction above).
 |      
 |      # Returns
 |          Python dictionary.
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.engine.topology.Layer:
 |  
 |  __call__(self, inputs, **kwargs)
 |      Wrapper around self.call(), for handling internal references.
 |      
 |      If a Keras tensor is passed:
 |          - We call self._add_inbound_node().
 |          - If necessary, we `build` the layer to match
 |              the _keras_shape of the input(s).
 |          - We update the _keras_shape of every input tensor with
 |              its new shape (obtained via self.compute_output_shape).
 |              This is done as part of _add_inbound_node().
 |          - We update the _keras_history of the output tensor(s)
 |              with the current layer.
 |              This is done as part of _add_inbound_node().
 |      
 |      # Arguments
 |          inputs: Can be a tensor or list/tuple of tensors.
 |          **kwargs: Additional keyword arguments to be passed to `call()`.
 |      
 |      # Returns
 |          Output of the layer's `call` method.
 |      
 |      # Raises
 |          ValueError: in case the layer is missing shape information
 |              for its `build` call.
 |  
 |  add_loss(self, losses, inputs=None)
 |      Add losses to the layer.
 |      
 |      The loss may potentially be conditional on some inputs tensors,
 |      for instance activity losses are conditional on the layer's inputs.
 |      
 |      # Arguments
 |          losses: loss tensor or list of loss tensors
 |              to add to the layer.
 |          inputs: input tensor or list of inputs tensors to mark
 |              the losses as conditional on these inputs.
 |              If None is passed, the loss is assumed unconditional
 |              (e.g. L2 weight regularization, which only depends
 |              on the layer's weights variables, not on any inputs tensors).
 |  
 |  add_update(self, updates, inputs=None)
 |      Add updates to the layer.
 |      
 |      The updates may potentially be conditional on some inputs tensors,
 |      for instance batch norm updates are conditional on the layer's inputs.
 |      
 |      # Arguments
 |          updates: update op or list of update ops
 |              to add to the layer.
 |          inputs: input tensor or list of inputs tensors to mark
 |              the updates as conditional on these inputs.
 |              If None is passed, the updates are assumed unconditional.
 |  
 |  add_weight(self, name, shape, dtype=None, initializer=None, regularizer=None, trainable=True, constraint=None)
 |      Adds a weight variable to the layer.
 |      
 |      # Arguments
 |          name: String, the name for the weight variable.
 |          shape: The shape tuple of the weight.
 |          dtype: The dtype of the weight.
 |          initializer: An Initializer instance (callable).
 |          regularizer: An optional Regularizer instance.
 |          trainable: A boolean, whether the weight should
 |              be trained via backprop or not (assuming
 |              that the layer itself is also trainable).
 |          constraint: An optional Constraint instance.
 |      
 |      # Returns
 |          The created weight variable.
 |  
 |  assert_input_compatibility(self, inputs)
 |      Checks compatibility between the layer and provided inputs.
 |      
 |      This checks that the tensor(s) `input`
 |      verify the input assumptions of the layer
 |      (if any). If not, exceptions are raised.
 |      
 |      # Arguments
 |          inputs: input tensor or list of input tensors.
 |      
 |      # Raises
 |          ValueError: in case of mismatch between
 |              the provided inputs and the expectations of the layer.
 |  
 |  build(self, input_shape)
 |      Creates the layer weights.
 |      
 |      Must be implemented on all layers that have weights.
 |      
 |      # Arguments
 |          input_shape: Keras tensor (future input to layer)
 |              or list/tuple of Keras tensors to reference
 |              for weight shape computations.
 |  
 |  compute_mask(self, inputs, mask=None)
 |      Computes an output mask tensor.
 |      
 |      # Arguments
 |          inputs: Tensor or list of tensors.
 |          mask: Tensor or list of tensors.
 |      
 |      # Returns
 |          None or a tensor (or list of tensors,
 |              one per output tensor of the layer).
 |  
 |  compute_output_shape(self, input_shape)
 |      Computes the output shape of the layer.
 |      
 |      Assumes that the layer will be built
 |      to match that input shape provided.
 |      
 |      # Arguments
 |          input_shape: Shape tuple (tuple of integers)
 |              or list of shape tuples (one per output tensor of the layer).
 |              Shape tuples can include None for free dimensions,
 |              instead of an integer.
 |      
 |      # Returns
 |          An input shape tuple.
 |  
 |  count_params(self)
 |      Count the total number of scalars composing the weights.
 |      
 |      # Returns
 |          An integer count.
 |      
 |      # Raises
 |          RuntimeError: if the layer isn't yet built
 |              (in which case its weights aren't yet defined).
 |  
 |  get_input_at(self, node_index)
 |      Retrieves the input tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A tensor (or list of tensors if the layer has multiple inputs).
 |  
 |  get_input_mask_at(self, node_index)
 |      Retrieves the input mask tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A mask tensor
 |          (or list of tensors if the layer has multiple inputs).
 |  
 |  get_input_shape_at(self, node_index)
 |      Retrieves the input shape(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple inputs).
 |  
 |  get_losses_for(self, inputs)
 |  
 |  get_output_at(self, node_index)
 |      Retrieves the output tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A tensor (or list of tensors if the layer has multiple outputs).
 |  
 |  get_output_mask_at(self, node_index)
 |      Retrieves the output mask tensor(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A mask tensor
 |          (or list of tensors if the layer has multiple outputs).
 |  
 |  get_output_shape_at(self, node_index)
 |      Retrieves the output shape(s) of a layer at a given node.
 |      
 |      # Arguments
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      # Returns
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple outputs).
 |  
 |  get_updates_for(self, inputs)
 |  
 |  get_weights(self)
 |      Returns the current weights of the layer.
 |      
 |      # Returns
 |          Weights values as a list of numpy arrays.
 |  
 |  set_weights(self, weights)
 |      Sets the weights of the layer, from Numpy arrays.
 |      
 |      # Arguments
 |          weights: a list of Numpy arrays. The number
 |              of arrays and their shape must match
 |              number of the dimensions of the weights
 |              of the layer (i.e. it should match the
 |              output of `get_weights`).
 |      
 |      # Raises
 |          ValueError: If the provided weights list does not match the
 |              layer's specifications.
 |  
 |  ----------------------------------------------------------------------
 |  Class methods inherited from keras.engine.topology.Layer:
 |  
 |  from_config(config) from builtins.type
 |      Creates a layer from its config.
 |      
 |      This method is the reverse of `get_config`,
 |      capable of instantiating the same layer from the config
 |      dictionary. It does not handle layer connectivity
 |      (handled by Container), nor weights (handled by `set_weights`).
 |      
 |      # Arguments
 |          config: A Python dictionary, typically the
 |              output of get_config.
 |      
 |      # Returns
 |          A layer instance.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from keras.engine.topology.Layer:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  built
 |  
 |  input
 |      Retrieves the input tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Input tensor or list of input tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  input_mask
 |      Retrieves the input mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Input mask tensor (potentially None) or list of input
 |          mask tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  input_shape
 |      Retrieves the input shape tuple(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Input shape tuple
 |          (or list of input shape tuples, one tuple per input tensor).
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  losses
 |  
 |  non_trainable_weights
 |  
 |  output
 |      Retrieves the output tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Output tensor or list of output tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  output_mask
 |      Retrieves the output mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      # Returns
 |          Output mask tensor (potentially None) or list of output
 |          mask tensors.
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  output_shape
 |      Retrieves the output shape tuple(s) of a layer.
 |      
 |      Only applicable if the layer has one inbound node,
 |      or if all inbound nodes have the same output shape.
 |      
 |      # Returns
 |          Output shape tuple
 |          (or list of input shape tuples, one tuple per output tensor).
 |      
 |      # Raises
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  trainable_weights
 |  
 |  updates
 |  
 |  weights

Keras also thinks you should read more about input_dim (see the Keras docs), but we won't go into it much further here.

Compiling

Thus far, we have tools and degrees of freedom to do a couple of cool things:

  1. Build arbitrarily deep stacks of layers, where each layer can be arbitrarily big (though just because we can doesn't mean we should!)
  2. Change our activation functions

There's a lot more we can do later, but let's see what it takes to actually run a model! When we configure the training process (with compile), we get some more degrees of freedom:

  1. An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class.
  2. A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function (such as categorical_crossentropy or mse), or it can be an objective function.
  3. A list of metrics. For any classification problem you will want to set this to metrics=['accuracy']. A metric could be the string identifier of an existing metric or a custom metric function.

For example:


In [23]:
# For a multi-class classification problem
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# For a binary classification problem
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# For a mean squared error regression problem
model.compile(optimizer='rmsprop',
              loss='mse')

# For custom metrics
import keras.backend as K

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy', mean_pred])

The last command that we care about is fit. To see how it works, let's start from scratch by generating dummy data with 1000 rows and 100 columns (features). Since this data is random noise, we shouldn't expect our model to do very well.


In [24]:
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))

Define a simple model to fit the data. Note two important things. First, the activation function is now specified as an argument to Dense() rather than as a separate layer -- a bit confusing, but allowed. Second, input_dim is set to match the number of columns of our input data, while the last layer is just 1 node with a sigmoid activation, Dense(1, activation='sigmoid'), because its size has to match the dimension of the labels.


In [28]:
from IPython.display import SVG                   # render SVG inline in the notebook
from keras.utils.vis_utils import model_to_dot    # convert a model to a pydot graph

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))
SVG(model_to_dot(model).create(prog='dot', format='svg'))


Out[28]:
[Model diagram: dense_15_input (InputLayer) -> dense_15 (Dense) -> dense_16 (Dense)]
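As an aside, if you don't have pydot/graphviz set up for model_to_dot, model.summary() prints the same structure as plain text:

# Text alternative to the diagram (no pydot/graphviz needed):
model.summary()  # prints each layer's name, output shape, and parameter count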

Compile the model. We'll use the rmsprop optimizer, and as a loss function binary_crossentropy works well (I am told). We'll set metrics=['accuracy'] so that training reports how accurate our model is.


In [29]:
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

Now train the model. The epochs parameter specifies how many passes we make through the data -- more on this later, but if it's too small we don't train enough and if it's too big we overfit. Since this is a toy example, we'll keep it small.
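(One common way to keep an eye on overfitting -- a sketch of an optional variant, not what we run below -- is fit's validation_split argument, which holds out a fraction of the data and reports loss/accuracy on it after each epoch:)

# Optional variant: hold out 20% of the data as a validation set;
# validation loss/accuracy is reported after every epoch.
model.fit(data, labels, epochs=10, batch_size=32, validation_split=0.2)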


In [30]:
model.fit(data, labels, epochs=10, batch_size=32)


Epoch 1/10
1000/1000 [==============================] - 0s - loss: 0.7209 - acc: 0.4730     
Epoch 2/10
1000/1000 [==============================] - 0s - loss: 0.7032 - acc: 0.5090     
Epoch 3/10
1000/1000 [==============================] - 0s - loss: 0.6920 - acc: 0.5340     
Epoch 4/10
1000/1000 [==============================] - 0s - loss: 0.6849 - acc: 0.5440     
Epoch 5/10
1000/1000 [==============================] - 0s - loss: 0.6783 - acc: 0.5800     
Epoch 6/10
1000/1000 [==============================] - 0s - loss: 0.6752 - acc: 0.5730     
Epoch 7/10
1000/1000 [==============================] - 0s - loss: 0.6714 - acc: 0.5730     
Epoch 8/10
1000/1000 [==============================] - 0s - loss: 0.6679 - acc: 0.5820     
Epoch 9/10
1000/1000 [==============================] - 0s - loss: 0.6639 - acc: 0.5990     
Epoch 10/10
1000/1000 [==============================] - 0s - loss: 0.6574 - acc: 0.6100     
Out[30]:
<keras.callbacks.History at 0x122383d68>

Cool! We have successfully fit our model to noise! Look how the loss function has decreased (a bit) and the accuracy (our metric!) has increased. Let's see another toy example, this time with 10 classes.


In [31]:
data = np.random.random((1000, 100))
labels = np.random.randint(10, size=(1000, 1))

We need to change the format of the labels a bit -- instead of a vector with a number 0-9 in each row, we want a matrix with 10 columns of zeros, except for a one in the column corresponding to the label. Then our neurons get a 1 back when they fire on the correct node and a 0 when they fire on an incorrect one. Luckily Keras has implemented this for us in one line.


In [33]:
from keras.utils import to_categorical
one_hot_labels = to_categorical(labels, num_classes=10)

In [35]:
labels[0:10]


Out[35]:
array([[1],
       [6],
       [5],
       [6],
       [6],
       [6],
       [2],
       [6],
       [5],
       [3]])

In [34]:
one_hot_labels[0:10,:]


Out[34]:
array([[ 0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.]])
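For intuition, here's roughly what to_categorical is doing, written out as a minimal numpy sketch (ours, not the actual Keras implementation):

# Build the one-hot matrix by hand: one row per label, a 1 in the label's column
n_classes = 10
manual_one_hot = np.zeros((labels.shape[0], n_classes))
manual_one_hot[np.arange(labels.shape[0]), labels.ravel()] = 1.

# It matches Keras's output
assert (manual_one_hot == one_hot_labels).all()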

Now we'll define, compile, and run our model all at once. Note that we have the same optimizer as before but a different loss function.


In [36]:
model = Sequential()
model.add(Dense(37, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, one_hot_labels, epochs=50, batch_size=32)


Epoch 1/50
1000/1000 [==============================] - 0s - loss: 2.3577 - acc: 0.0820     
Epoch 2/50
1000/1000 [==============================] - 0s - loss: 2.3204 - acc: 0.1060     
Epoch 3/50
1000/1000 [==============================] - 0s - loss: 2.3065 - acc: 0.1230     
Epoch 4/50
1000/1000 [==============================] - 0s - loss: 2.2937 - acc: 0.1250     
Epoch 5/50
1000/1000 [==============================] - 0s - loss: 2.2871 - acc: 0.1390     
Epoch 6/50
1000/1000 [==============================] - 0s - loss: 2.2791 - acc: 0.1310     
Epoch 7/50
1000/1000 [==============================] - 0s - loss: 2.2712 - acc: 0.1510     
Epoch 8/50
1000/1000 [==============================] - 0s - loss: 2.2651 - acc: 0.1490     
Epoch 9/50
1000/1000 [==============================] - 0s - loss: 2.2594 - acc: 0.1600     
Epoch 10/50
1000/1000 [==============================] - 0s - loss: 2.2484 - acc: 0.1660     
Epoch 11/50
1000/1000 [==============================] - 0s - loss: 2.2417 - acc: 0.1720     
Epoch 12/50
1000/1000 [==============================] - 0s - loss: 2.2333 - acc: 0.1870     
Epoch 13/50
1000/1000 [==============================] - 0s - loss: 2.2250 - acc: 0.1760     
Epoch 14/50
1000/1000 [==============================] - 0s - loss: 2.2163 - acc: 0.1910     
Epoch 15/50
1000/1000 [==============================] - 0s - loss: 2.2067 - acc: 0.2050     
Epoch 16/50
1000/1000 [==============================] - 0s - loss: 2.1999 - acc: 0.1990     
Epoch 17/50
1000/1000 [==============================] - 0s - loss: 2.1906 - acc: 0.2220     
Epoch 18/50
1000/1000 [==============================] - 0s - loss: 2.1791 - acc: 0.2270     
Epoch 19/50
1000/1000 [==============================] - 0s - loss: 2.1713 - acc: 0.2220     
Epoch 20/50
1000/1000 [==============================] - 0s - loss: 2.1619 - acc: 0.2430     
Epoch 21/50
1000/1000 [==============================] - 0s - loss: 2.1520 - acc: 0.2560     
Epoch 22/50
1000/1000 [==============================] - 0s - loss: 2.1393 - acc: 0.2400     
Epoch 23/50
1000/1000 [==============================] - 0s - loss: 2.1324 - acc: 0.2490     
Epoch 24/50
1000/1000 [==============================] - 0s - loss: 2.1223 - acc: 0.2570     
Epoch 25/50
1000/1000 [==============================] - 0s - loss: 2.1083 - acc: 0.2570     
Epoch 26/50
1000/1000 [==============================] - 0s - loss: 2.1021 - acc: 0.2730     
Epoch 27/50
1000/1000 [==============================] - 0s - loss: 2.0909 - acc: 0.2680     
Epoch 28/50
1000/1000 [==============================] - 0s - loss: 2.0811 - acc: 0.2910     
Epoch 29/50
1000/1000 [==============================] - 0s - loss: 2.0714 - acc: 0.2800     
Epoch 30/50
1000/1000 [==============================] - 0s - loss: 2.0600 - acc: 0.2960     
Epoch 31/50
1000/1000 [==============================] - 0s - loss: 2.0509 - acc: 0.2920     
Epoch 32/50
1000/1000 [==============================] - 0s - loss: 2.0381 - acc: 0.3040     
Epoch 33/50
1000/1000 [==============================] - 0s - loss: 2.0278 - acc: 0.2980     
Epoch 34/50
1000/1000 [==============================] - 0s - loss: 2.0162 - acc: 0.3150     
Epoch 35/50
1000/1000 [==============================] - 0s - loss: 2.0111 - acc: 0.3090     
Epoch 36/50
1000/1000 [==============================] - 0s - loss: 2.0007 - acc: 0.3290     
Epoch 37/50
1000/1000 [==============================] - 0s - loss: 1.9840 - acc: 0.3310     
Epoch 38/50
1000/1000 [==============================] - 0s - loss: 1.9777 - acc: 0.3320     
Epoch 39/50
1000/1000 [==============================] - 0s - loss: 1.9665 - acc: 0.3340     
Epoch 40/50
1000/1000 [==============================] - 0s - loss: 1.9557 - acc: 0.3430     
Epoch 41/50
1000/1000 [==============================] - 0s - loss: 1.9436 - acc: 0.3520     
Epoch 42/50
1000/1000 [==============================] - 0s - loss: 1.9329 - acc: 0.3540     
Epoch 43/50
1000/1000 [==============================] - 0s - loss: 1.9247 - acc: 0.3600     
Epoch 44/50
1000/1000 [==============================] - 0s - loss: 1.9150 - acc: 0.3610     
Epoch 45/50
1000/1000 [==============================] - 0s - loss: 1.9024 - acc: 0.3760     
Epoch 46/50
1000/1000 [==============================] - 0s - loss: 1.8947 - acc: 0.3790     
Epoch 47/50
1000/1000 [==============================] - 0s - loss: 1.8833 - acc: 0.3780     
Epoch 48/50
1000/1000 [==============================] - 0s - loss: 1.8736 - acc: 0.3740     
Epoch 49/50
1000/1000 [==============================] - 0s - loss: 1.8614 - acc: 0.4010     
Epoch 50/50
1000/1000 [==============================] - 0s - loss: 1.8509 - acc: 0.3840     
Out[36]:
<keras.callbacks.History at 0x122b3c828>

We have now fit our model and gotten our accuracy up substantially -- but remember, we are classifying noise! Let's go to another notebook to see some examples on real data! I recommend this cheat sheet for Keras syntax! As one last peek under the hood, note that each Dense layer stores its weights as TensorFlow variables:


In [42]:
model.weights[0]


Out[42]:
<tf.Variable 'dense_17/kernel:0' shape=(100, 37) dtype=float32_ref>
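The help text above said get_weights returns the current values as a list of numpy arrays; a quick sketch of pulling them out of the first Dense layer:

# Kernel and bias of the first Dense(37, input_dim=100) layer
kernel, bias = model.layers[0].get_weights()
print(kernel.shape)  # (100, 37): one weight per input feature per node
print(bias.shape)    # (37,): one bias per node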