How do I convert a numpy array to a data type that TensorFlow can classify?

I am writing a Python program to detect the state of a chess board, using a sliding window to detect the position of each piece. My main program detects the chessboard within an image and passes the cropped picture to the my_sliding_window method, which is supposed to use TensorFlow to detect a piece in the sliding window. From this tutorial I saw that pictures are read like this:


image_data = tf.gfile.FastGFile('picture.jpg', 'rb').read()



But I don't want to read it from a file, since I already have the picture in a numpy array. How do I make my numpy array into something that TensorFlow can classify?



Thank you.



Code:


import tensorflow as tf, sys
import cv2

image_path = sys.argv[1]


img = cv2.imread('picture.jpg')
image_data = tf.convert_to_tensor(img)
print type(image_data) # this returns <class 'tensorflow.python.framework.ops.Tensor'>

# This is what is used in the tutorial I mentioned above
image_data2 = tf.gfile.FastGFile(image_path, 'rb').read()
print type(image_data2) # this returns <type 'str'>


# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
               in tf.gfile.GFile("retrained_labels.txt")]

# Unpersists graph from file
with tf.gfile.FastGFile("retrained_graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # Feed the image_data as input to the graph and get first prediction
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')

    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})

    # Sort to show labels of first prediction in order of confidence
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]

    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        print('%s (score = %.5f)' % (human_string, score))




2 Answers



You could use tf.convert_to_tensor() to convert your numpy array into a TensorFlow tensor:

This function converts Python objects of various types to Tensor objects. It accepts Tensor objects, numpy arrays, Python lists, and Python scalars.
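For instance, a minimal sketch (the zero array is just a stand-in for the cropped board image you get from cv2):

import numpy as np
import tensorflow as tf

img = np.zeros((123, 82), dtype=np.uint8)  # stand-in for the numpy array from cv2.imread
tensor = tf.convert_to_tensor(img)         # a tf.Tensor with shape (123, 82) and dtype uint8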



OK, so what you're trying to do is feed the numpy array image_data, with dimensions [123, 82], to the placeholder DecodeJpeg/contents:0. However, that placeholder was defined with shape=(), meaning it only accepts 0-D tensors as input (see tensor shapes), hence the error you're getting.
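As an illustration, you can reproduce the same failure with a stand-alone placeholder that has the same shape=() (this is not the retrained graph, just a minimal stand-in):

import numpy as np
import tensorflow as tf

contents = tf.placeholder(tf.string, shape=())      # 0-D, like DecodeJpeg/contents:0
image = tf.image.decode_jpeg(contents, channels=3)  # decoding happens inside the graph

with tf.Session() as sess:
    # Raises ValueError: Cannot feed value of shape (123, 82) for Tensor ... which has shape '()'
    sess.run(image, {contents: np.zeros((123, 82))})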



What the original code does is read the image file as a dimensionless (0-D) string with:


image_data = tf.gfile.FastGFile(image_path, 'rb').read()



which is then fed to the DecodeJpeg/contents:0 placeholder in:

predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})



The easiest way to proceed and try to run your images through the pretrained graph would be to use the same tf.gfile.FastGFile() call for loading the images, as sketched below.
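In that case the feed value is the raw JPEG byte string, exactly as in the tutorial. A minimal sketch, assuming the same retrained_graph.pb and tensor names used in the question:

import tensorflow as tf

image_data = tf.gfile.FastGFile('picture.jpg', 'rb').read()  # raw JPEG bytes, a 0-D string

with tf.gfile.FastGFile('retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})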





Using that function on my numpy array I get this error: TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays. I'll include my code in the question.
– BourbonCreams
Feb 24 '17 at 15:03






That's because you cannot pass a Tensor object as a feed argument to sess.run(). Try passing the numpy array directly.
– jabalazs
Feb 25 '17 at 0:58







That's what I tried in the first place but I get this error: ValueError: Cannot feed value of shape (123, 82) for Tensor u'DecodeJpeg/contents:0', which has shape '()'. I get it from this line: predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
– BourbonCreams
Feb 25 '17 at 16:13






So I should (and in fact this is what I did while trying to understand this) write those images to a file and read them back using tf.gfile.FastGFile(image_path, 'rb').read()? It works, but it is not very efficient. There must be a way to use the numpy arrays directly.
– BourbonCreams
Feb 26 '17 at 10:50





In your code, according to the line img = cv2.imread('picture.jpg'), it seems like you already have the image file. My suggestion is that you do image_data = tf.gfile.FastGFile('picture.jpg', 'rb').read(), and then feed that to the placeholder. Also, there is indeed a way to use numpy arrays as input to your placeholder, but that would require you to either create the graph again or attempt to modify the pretrained one (I'm not sure whether that's possible, though).
– jabalazs
Feb 26 '17 at 11:51





I solved the problem by using this one-liner:


image_data = cv2.imencode('.jpg', cv_image)[1].tostring()
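That is, the OpenCV image (a numpy array) is JPEG-encoded in memory, so image_data becomes the same kind of 0-D byte string that tf.gfile.FastGFile(...).read() returns, and no temporary file is written. A minimal sketch of how it slots into the question's pipeline (cv_image stands for whatever BGR array the sliding window produces):

import cv2

cv_image = cv2.imread('picture.jpg')                       # or any BGR numpy array from the sliding window
image_data = cv2.imencode('.jpg', cv_image)[1].tostring()  # JPEG bytes as a 0-D string

# image_data can now be fed to the graph exactly as before:
# predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})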






