
API Docs - v1.0.4

TensorFlow

predict (Stream Processor)

Performs inference (prediction) using an already built TensorFlow machine learning model. The supported types of models are unlimited (including image classifiers and deep learning models), as long as they satisfy the following conditions.
1. The model is saved with the tag 'serve' in the SavedModel format (see https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md)
2. The model is initially trained and ready for inference
3. The inference logic is written and saved in the model
4. A signature_def is properly included in the MetaGraphDef (a protocol buffer that holds information about the graph), and the key for the prediction signature_def is 'serving_default' (a minimal export sketch follows this list)
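
As a rough illustration (not part of this extension), a model satisfying these conditions could be exported with the TensorFlow 1.x Python API as sketched below; the toy graph, tensor names, and export_dir are hypothetical placeholders.

import tensorflow as tf

# Hypothetical toy graph; substitute an actual trained model here
x = tf.placeholder(tf.float32, shape=[None, 784], name='inputPoint')
keep_prob = tf.placeholder(tf.float32, name='dropout')  # unused by this toy graph
weights = tf.Variable(tf.zeros([784, 10]))
bias = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, weights) + bias, name='outputPoint')

export_dir = '/path/to/model'  # hypothetical export location

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  # Describe the inputs and outputs in a prediction signature_def
  signature = tf.saved_model.signature_def_utils.predict_signature_def(
      inputs={'inputPoint': x, 'dropout': keep_prob},
      outputs={'outputPoint': y})
  builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
  # tag_constants.SERVING is 'serve'; the signature key is 'serving_default'
  builder.add_meta_graph_and_variables(
      sess,
      [tf.saved_model.tag_constants.SERVING],
      signature_def_map={
          tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
  builder.save()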

The prerequisites for inference are as follows.
1. The user knows the names of the input and output nodes
2. The user has a preprocessed data set of Java primitive types or their multidimensional arrays

Since each input is used directly to create a Tensor, it should have a shape and data type compatible with the model.
The information related to the input and output nodes can be retrieved from the saved model signature_def. The signature_def can be read using the saved_model_cli commands described at https://www.tensorflow.org/programmers_guide/saved_model.
The signature_def can also be read in Python as follows:
import tensorflow as tf

with tf.Session() as sess:
  # export_dir is the path to the SavedModel directory
  md = tf.saved_model.loader.load(sess, ['serve'], export_dir)
  sig = md.signature_def[tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
  print(sig)

Alternatively, the signature_def can be read in Java as follows:
final String DEFAULT_SERVING_SIGNATURE_DEF_KEY = "serving_default";

// Load the SavedModel saved with the 'serve' tag; exportDir is the model directory path
final SavedModelBundle model = SavedModelBundle.load(exportDir, "serve");

final SignatureDef sig =
      MetaGraphDef.parseFrom(model.metaGraphDef())
          .getSignatureDefOrThrow(DEFAULT_SERVING_SIGNATURE_DEF_KEY);

The following imports are required in Java.
import org.tensorflow.SavedModelBundle;
import org.tensorflow.framework.MetaGraphDef;
import org.tensorflow.framework.SignatureDef;

Syntax

tensorFlow:predict(<STRING> absolute.path.to.model, <STRING> input.node.names, <STRING> output.node.names, <INT|STRING|DOUBLE|LONG|FLOAT|BOOL|OBJECT> attributes)

QUERY PARAMETERS

absolute.path.to.model
    Description: The absolute path to the model directory on the local machine.
    Default Value: N/A
    Possible Data Types: STRING
    Optional: No
    Dynamic: No

input.node.names
    Description: A variable-length parameter; the names of the input nodes, as comma-separated strings.
    Default Value: N/A
    Possible Data Types: STRING
    Optional: No
    Dynamic: No

output.node.names
    Description: A variable-length parameter; the names of the output nodes, as comma-separated strings.
    Default Value: N/A
    Possible Data Types: STRING
    Optional: No
    Dynamic: No

attributes
    Description: A variable-length parameter; the attributes that arrive with events. Note that arrays should be cast to objects before being sent.
    Default Value: N/A
    Possible Data Types: INT, STRING, DOUBLE, LONG, FLOAT, BOOL, OBJECT
    Optional: No
    Dynamic: No
Extra Return Attributes
outputs
    Description: A variable-length return attribute. The output tensors from the inference are flattened and sent as their primitive values. Users are expected to know the shape of each output tensor if they wish to reconstruct it (see the sketch below). The shape and data type information can be retrieved from the TensorFlow saved model signature_def; see the description of this extension for instructions on how to read the signature_def.
    Possible Types: INT, STRING, DOUBLE, LONG, FLOAT, BOOL
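
As a point of reference, here is a minimal sketch of how the flattened values could be reassembled, assuming (as in the MNIST example below) a signature_def that declares a single output of shape [1, 10]; the values in flat_outputs are placeholders.

import numpy as np

# Placeholder values standing in for the ten flattened output attributes
flat_outputs = [0.01, 0.02, 0.85, 0.01, 0.02, 0.01, 0.03, 0.02, 0.01, 0.02]

# Reshape using the shape declared in the model's signature_def ([1, 10] here)
output_tensor = np.array(flat_outputs).reshape(1, 10)

# For a classifier, the index of the largest probability is the predicted class
predicted_digit = int(np.argmax(output_tensor))
print(predicted_digit)  # prints 2 for the placeholder values above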

Examples

EXAMPLE 1

define stream InputStream (x Object, y Object);
@info(name = 'query1') 
from InputStream#tensorFlow:predict('home/MNIST', 'inputPoint', 'dropout', 'outputPoint', x, y) 
select outputPoint0, outputPoint1, outputPoint2, outputPoint3, outputPoint4, outputPoint5, outputPoint6, outputPoint7, outputPoint8, outputPoint9 
insert into OutputStream;

This query obtains inferences from an MNIST model. The model takes two inputs, the image as a float array and the keep probability array, and outputs a tensor with 10 elements. The stream processor flattens that tensor and sends out 10 floats, each representing the probability of the image being the digit 0, 1, ..., 9.