I debated writing this post because, to be quite honest, it feels a little “vulnerable” talking about bugs I should have caught or exposing things I didn’t think about. However, if other new devs are reading this, hopefully it encourages them to know that these things happen, and this is just how it goes sometimes (at least in my experience). That said, there’s nothing more motivating for patching bugs than seeing them broadcast by independent parties on YouTube. I sent keys out to some folks to help get coverage and feedback. One of them was https://voiagamer.itch.io/ who posted this video today:
I’m super grateful for his post (he has nearly 7,000 followers), but the very public exposure of some bugs made me cringe a little inside. There was one GLARING bug that I was aware of but, as a developer, had become “numb” to because in my mind it was minor: the score increments when you die, but on your last life there’s a discrepancy between your printed score and your High Score. It turns out the root cause was a race condition between the enemy’s death and the player being cleaned up because it was his last life. That was an easy fix.
The second was new to me and I ALMOST missed it watching his video. In the middle of the video he starts a new game and dies once, then almost clears the FIRST wave and dies with 2 enemies left on the screen. Then you see “Wave 4” pop up as he starts his new life.
Wait a minute! How did he jump from Wave 1 -> Wave 4? At first I thought the video had simply been edited to bypass some gameplay, but I rewatched it and sure enough the score stayed the same and there was no cut – it was in fact a bug.
Thanks to it being recorded, I was able to replicate it pretty easily. What I realized is that I had the logic for broadcasting events in an if statement in the main update loop of the Spawn Controller (those orange things that spit out enemies). There are a few boolean conditions that come together to fire a “wave update” signal to the game, and it seems that when there are 3 or fewer enemies left in the wave and you die, that signal gets fired for a few frames before the slots / callbacks respond to it! I realized I didn’t need that check every frame and moved it to the logic that only gets fired when an enemy dies. So far, I can’t reproduce the issue now!
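For anyone curious, the shape of the fix looks roughly like the snippet below. This is purely illustrative Python with made-up names (SpawnController, emit_signal, enemies_remaining are all hypothetical), not the actual Spawn Controller code – it just captures the change from a per-frame check to an event-driven one.
# Illustrative sketch with hypothetical names -- not the real Spawn Controller code.
class SpawnController:
    def __init__(self, enemies_in_wave):
        self.enemies_remaining = enemies_in_wave

    def emit_signal(self, name):
        print('signal:', name)  # stand-in for the engine's signal system

    # The old version performed this check inside the per-frame update loop,
    # so the signal could fire for several frames in a row before the
    # connected slots / callbacks responded.
    def on_enemy_died(self, enemy):
        self.enemies_remaining -= 1
        if self.enemies_remaining <= 0:
            self.emit_signal('wave_update')  # now fires once, on the event itself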
So that’s how my day began – now onto creating bosses 🙂
This week I’ve worked on setting up the Steam store for Final Storm, and the Itch.io Final Storm page is set up to take pre-orders. I have a release date set for November 2nd, 2017. That window gives me time to finish polishing the game, knock some remaining bugs out of it, and let the Steam vetting process run its course. I’m really excited at this point because it is so close to release!
So you’ve built an awesome machine learning model in Keras and now you want to run it natively through Tensorflow. This tutorial will show you how. All of the code in this tutorial can be cloned / downloaded from https://github.com/bitbionic/keras-to-tensorflow.git . You may want to clone it to follow along.
Keras is a wonderful high-level framework for building machine learning models. It can use multiple backends, such as Tensorflow or Theano, to do so. When a Keras model is saved via the .save method, it is serialized to the HDF5 format. Tensorflow works with Protocol Buffers, and therefore loads and saves .pb files. This tutorial demonstrates how to:
build a SIMPLE Convolutional Neural Network in Keras for image classification
save the Keras model as an HDF5 model
verify the Keras model
convert the HDF5 model to a Protocol Buffer
build a Tensorflow C++ shared library
utilize the .pb in a pure Tensorflow app
We will utilize Tensorflow’s own example code for this
I am conducting this tutorial on Linux Mint 18.1, using GPU accelerated Tensorflow version 1.1.0 and Keras version 2.0.4. I have run this on Tensorflow v.1.3.0 as well.
A NOTE ABOUT WINDOWS: Everything here SHOULD work on Windows as well until we reach C++. Building Tensorflow on Windows is a bit different (and to this point a bit more challenging) and I haven’t fully vetted the C++ portion of this tutorial on Windows yet. I will update this post upon vetting Windows.
Assumptions
You are familiar with Python (and C++ if you’re interested in the C++ portion of this tutorial)
You are familiar with Keras and Tensorflow and already have your dev environment set up
The example code uses Python 3.5; if you are using 2.7 you may have to make modifications
Get a dataset
I’m assuming that if you’re interested in this topic you probably already have some image classification data. You may use that or follow along with this tutorial where we use the flowers data from the Tensorflow examples. It’s about 218 MB and you can download it from http://download.tensorflow.org/example_images/flower_photos.tgz
After extracting the data you should see a folder structure similar to the image shown here. There are 5 categories and the data is pre-sorted into test and train.
Train your model
I will use a VERY simple CNN for this example, however the techniques to port the models work equally well with the built-in Keras models such as Inception and ResNet. I have no illusions that this model will win any awards, but it will serve our purpose.
There are a few things to note from the code listed below:
Label your input and output layer(s) – this will make it easier to debug when the model is converted.
I’m relying on the Model Checkpoint to save my .h5 files – you could also just call classifier.save after the training is complete.
Make note of the shape parameter you utilize; we will need that when we run the model later.
k2tf_trainer.py
'''
This script builds and trains a simple Convolutional Neural Network (CNN)
against a supplied data set. It is used in a tutorial demonstrating
how to build Keras models and run them in native C++ Tensorflow applications.
MIT License
Copyright (c) 2017 bitbionic
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''

# Only the epoch and shape argument definitions are excerpted here; see
# k2tf_trainer.py in the repository linked above for the full listing.
parser.add_argument('--epochs', '-e', dest='epochs', default=30, type=int, required=False, help='number of epochs to run (default: 30)')
parser.add_argument('--shape', '-s', dest='shape', default=128, type=int, required=False, help='The shape of the image, single dimension will be applied to height and width (default: 128)')
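Since only an excerpt of the full listing appears above, here is a condensed sketch of the trainer’s core logic, assuming Keras 2.0.x with the Tensorflow backend. The firstConv2D layer name, the 80×80 shape, the five categories, the ModelCheckpoint usage and the indices output come from elsewhere in this post; the directory paths, layer sizes and augmentation settings are illustrative stand-ins, so treat this as a sketch rather than the real k2tf_trainer.py.
# Condensed sketch of the trainer's core logic (not the full k2tf_trainer.py).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint

shape = (80, 80)    # matches the 80x80 training size used later in this post
num_classes = 5     # the five flower categories

# Name the input and output layers -- those names show up in the frozen graph.
classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape=shape + (3,), activation='relu', name='firstConv2D'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Conv2D(32, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Flatten())
classifier.add(Dense(128, activation='relu'))
classifier.add(Dense(num_classes, activation='softmax', name='finalDense'))
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Rescale and lightly augment the training imagery; only rescale the test imagery.
train_gen = ImageDataGenerator(rescale=1. / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_gen = ImageDataGenerator(rescale=1. / 255)

training_set = train_gen.flow_from_directory('flower_photos/train', target_size=shape, batch_size=16, class_mode='categorical')
test_set = test_gen.flow_from_directory('flower_photos/test', target_size=shape, batch_size=16, class_mode='categorical')

# Save the class indices so they can be mapped back to labels later.
with open('indices.txt', 'w') as f:
    f.write(str(training_set.class_indices))

# ModelCheckpoint writes .h5 files as validation accuracy improves;
# classifier.save('model.h5') after training works just as well.
checkpoint = ModelCheckpoint('model-{epoch:02d}-{val_acc:.2f}.h5', monitor='val_acc', save_best_only=True)
classifier.fit_generator(training_set,
                         steps_per_epoch=training_set.samples // training_set.batch_size,
                         epochs=30,
                         validation_data=test_set,
                         validation_steps=test_set.samples // test_set.batch_size,
                         callbacks=[checkpoint])
The indices.txt written here is what the evaluation script’s --labels argument expects, and the .h5 checkpoints are what we convert to a Protocol Buffer later on.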
I down-sampled the imagery significantly and ran more training than I needed to – the command just points k2tf_trainer.py at the train and test folders with --shape 80 and a modest batch size. (NOTE: I ran this on some old hardware using GPU acceleration on an NVIDIA GTX-660 – you can probably increase the batch size significantly assuming you have better hardware.)
A few runs of this yielded val_acc in the 83-86% range, and while it’s no Inception, it’s good enough for this exercise.
Test your model
So now let’s just do a quick gut-check on our model – here’s a small script to load your model, image, shape and indices (especially if you didn’t use the flowers set):
k2tf_eval.py
'''
This script evaluates a simple Convolutional Neural Network (CNN)
against a supplied data set. It is used in a tutorial demonstrating
how to build Keras models and run them in native C++ Tensorflow applications.
MIT License
Copyright (c) 2017 bitbionic
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''
import argparse
import numpy as np

from keras.preprocessing import image
from keras.models import load_model


def invertKVPairs(someDictionary):
    '''
    Inverts the key/value pairs of the supplied dictionary.

    Args:
        someDictionary (dict): The dictionary for which you would like the inversion

    Returns:
        Dictionary - the inverse key-value pairing of someDictionary
    '''
    ret = {}
    for k, v in someDictionary.items():
        ret[v] = k
    return ret


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', '-m', dest='model', required=True, help='The HDF5 Keras model you wish to run')
    parser.add_argument('--image', '-i', dest='image', required=True, help='The image you wish to test')
    parser.add_argument('--shape', '-s', type=int, dest='shape', required=True, help='The shape to resize the image for the model')
    parser.add_argument('--labels', '-l', dest='labels', required=False, help='The indices.txt file containing the class_indices of the Keras training set')
    args = parser.parse_args()

    model = load_model(args.model)

    # These indices are saved on the output of our trainer. The lines below are a
    # reconstruction of the rest of the script: they assume indices.txt holds a
    # printed class_indices dict, load and scale the image the same way the model
    # was trained, and print the most likely class.
    labels = {}
    if args.labels:
        import ast
        with open(args.labels) as f:
            labels = invertKVPairs(ast.literal_eval(f.read()))

    img = image.load_img(args.image, target_size=(args.shape, args.shape))
    x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

    preds = model.predict(x)[0]
    best = int(np.argmax(preds))
    print(labels.get(best, best), preds[best])
Convert your Keras model to a Protocol Buffer
I adapted the notebook from the link above to a script we can run from the command line. The code is almost identical except for the argument parsing. This code does the following:
Loads your .h5 file
Replaces your output tensor(s) with a named Identity Tensor – this can be helpful if you are using a model you didn’t build and don’t know all of the output names (of course you could go digging, but this avoids that).
Saves an ASCII representation of the graph definition. I use this to verify my input and output names for Tensorflow. This can be useful in debugging.
Converts all variables within the graph to constants.
Writes the resulting graph to the output name you specify in the script.
With that said, here’s the gist of the code (the full conversion script is in the repo linked above):
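What follows is a minimal sketch of that conversion logic, assuming the Tensorflow 1.x freezing API (graph_util.convert_variables_to_constants) and the Keras backend session. The model path and output filenames are placeholders, and the real script wraps this in argument parsing, including the prefix override discussed below.
# Minimal sketch of the HDF5 -> Protocol Buffer conversion (Tensorflow 1.x APIs).
import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K
from keras.models import load_model

K.set_learning_phase(0)            # put Keras in inference mode before loading
model = load_model('model.h5')     # placeholder path to the trained .h5

# Give every output tensor a predictable, uniquely prefixed Identity node.
prefix = 'k2tfout'
outputs = [tf.identity(t, name='{}_{}'.format(prefix, i)) for i, t in enumerate(model.outputs)]
output_names = [o.op.name for o in outputs]

sess = K.get_session()

# Write an ASCII dump of the graph definition -- useful for finding the
# input/output node names referenced later in this post.
tf.train.write_graph(sess.graph.as_graph_def(), '.', 'output_graph.pb.ascii', as_text=True)

# Fold all variables into constants and write the frozen graph.
constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), output_names)
tf.train.write_graph(constant_graph, '.', 'output_graph.pb', as_text=False)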
When you run the code, it’s important that you make the prefix name unique to the graph. If you didn’t build the graph, using something like “output” or some other generic name has the potential of colliding with a node of the same name within the graph. I recommend making the prefix name uniquely identifiable, and for that reason this script defaults the prefix to “k2tfout” though you can override that with whatever you prefer.
And now let’s run this little guy on our trained model.
As you can see, two files were written out: an ASCII file and a .pb file. Let’s look at the graph structure – notice the input node name “firstConv2D_input” and the output name “k2tfout_0”; we will use those in the next section.
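If you would rather not scan the ASCII dump by hand, a few lines of Python can list the node names directly from the .pb – this is just a convenience, not part of the tutorial’s scripts, and the filename is a placeholder.
# List the op type and name of every node in the frozen graph.
import tensorflow as tf

graph_def = tf.GraphDef()
with open('output_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.op, node.name)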
I copied both of those files into the git repo for this tutorial. Now let’s test them out.
Running your Tensorflow model with Python
Running the Python script is fairly straightforward. Remember, we need to supply the following arguments (a rough Python sketch of the equivalent loading and inference follows the list):
the output_graph.pb we generated above
the labels file – this is supplied with the dataset but you could generate a similar labels.txt from the indices.txt file we produced in our Keras model training
input width and height. Remember I trained with 80×80 so I must adjust for that here
The input layer name – I find this in the generated ASCII file from the conversion we did above. In this case it is “firstConv2D_input” – remember, our k2tf_trainer.py named the first layer “firstConv2D”.
The output layer name – we created this with the prefix and can verify it in our ASCII file. We went with the script default, which gives “k2tfout_0”.
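The label_image example script handles the image decoding with Tensorflow ops, but for reference, here is roughly what loading the frozen graph and running it looks like in Python using the names and sizes above. The image path is a placeholder, and Keras is used here only as a convenient image loader.
# Rough sketch of loading and running the frozen graph in Python (Tensorflow 1.x).
import numpy as np
import tensorflow as tf
from keras.preprocessing import image   # used here only to load and resize the image

# Load the frozen Protocol Buffer produced by the conversion step.
graph_def = tf.GraphDef()
with open('output_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Prepare the input the same way the model was trained: 80x80, scaled to [0, 1].
img = image.load_img('some_flower.jpg', target_size=(80, 80))   # placeholder image path
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

input_tensor = graph.get_tensor_by_name('firstConv2D_input:0')
output_tensor = graph.get_tensor_by_name('k2tfout_0:0')

with tf.Session(graph=graph) as sess:
    print(sess.run(output_tensor, feed_dict={input_tensor: x}))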
So now Tensorflow is running our model in Python – but how do we get to C++?
Running your Tensorflow model with C++
If you are still reading, then I’m assuming you need to figure out how to run Tensorflow in C++ in a production environment. This is where I landed, and I had to bounce between fragments of tutorials to get things to work. Hopefully the information here will give you a consolidated view of how to accomplish this.
For my project, I wanted to have a Tensorflow shared library that I could link and deploy. That’s what we’ll build here, and then we’ll build the label_image example against it.
To run our models in C++ we first need to obtain the Tensorflow source tree. The instructions are here, but we’ll walk through them below.
Now that we have the source code, we need the tools to build it. On Linux or Mac, Tensorflow uses Bazel. Windows uses CMake (I tried using Bazel on Windows but was not able to get it to work). Again, installation instructions are here for Linux and here for Mac, but we’ll walk through the Linux instructions below. Of course there may be some other dependencies, but I’m assuming that if you’re taking on building Tensorflow, this isn’t your first rodeo.
Install Python dependencies (I’m using 3.x, for 2.x omit the ‘3’)
Install JDK 8, you can use either Oracle or OpenJDK. I’m using openjdk-8 on my system (in fact I think I already had it installed). If you don’t, simply type:
Install JDK 8
sudo apt-get install openjdk-8-jdk
NOTE: I have not tested building with CUDA – this is just the documentation that I’ve read. For deployment I didn’t want to build with CUDA, however if you do then you of course need the CUDA SDK and the CUDNN code from NVIDIA. You’ll also need to grab libcupti-dev.
At this point you should be able to run bazel help and get feedback:
Test Bazel Install
(tensorflow) ~/Development/keras-to-tensorflow/tensorflow$ bazel help
.......................................
[bazel release 0.5.3]
Usage: bazel <command> <options> ...

Available commands:
  analyze-profile     Analyzes build profile data.
  build               Builds the specified targets.
  canonicalize-flags  Canonicalizes a list of bazel options.
  clean               Removes output files and optionally stops the server.
  coverage            Generates code coverage report for specified test targets.
  dump                Dumps the internal state of the bazel server process.
  fetch               Fetches external repositories that are prerequisites to the targets.
  help                Prints help for commands, or the index.
  info                Displays runtime info about the bazel server.
  license             Prints the license of this software.
  mobile-install      Installs targets to mobile devices.
  query               Executes a dependency graph query.
  run                 Runs the specified target.
  shutdown            Stops the bazel server.
  test                Builds and runs the specified test targets.
  version             Prints version information for bazel.

Getting more help:
  bazel help <command>
                  Prints help and options for <command>.
  bazel help startup_options
                  Options for the JVM hosting bazel.
  bazel help target-syntax
                  Explains the syntax for specifying targets.
  bazel help info-keys
                  Displays a list of keys used by the info command.
Now that we have everything installed, we can configure and build. Make sure you’re in the top-level tensorflow directory. I went with all the default configuration options. When you do this, the configuration tool will download a bunch of dependencies – this may take a minute or two.
Alright, we’re configured, now it’s time to build. DISCLAIMER: I AM NOT a Bazel guru. I found these settings via Google-Fu and digging around in the configuration files. I could not find a way to get Bazel to dump the available targets. Most tutorials I saw are about building the pip target – however, I wanted to build a .so. I started looking through the BUILD files to find targets and found these in tensorflow/tensorflow/BUILD:
Tensorflow Targets
cc_binary(
    name = "libtensorflow.so",
    linkshared = 1,
    deps = [
        "//tensorflow/c:c_api",
        "//tensorflow/core:tensorflow",
    ],
)

cc_binary(
    name = "libtensorflow_cc.so",
    linkshared = 1,
    deps = [
        "//tensorflow/c:c_api",
        "//tensorflow/cc:cc_ops",
        "//tensorflow/cc:client_session",
        "//tensorflow/cc:scope",
        "//tensorflow/core:tensorflow",
    ],
)
So with that in mind here’s the command for doing this (you may want to alter jobs based on your number of cores and RAM – also you can remove avx, mfpmath and msse4.2 optimizations if you wish):
Go get some coffee, breakfast, lunch or watch a show. This will grind for a while, but you should end up with bazel-bin/tensorflow/libtensorflow_cc.so.
Let’s run it!
In the tutorial git repo, I’m including a qmake .pro file that links the .so and all of the required header locations. I’m including it for reference – you DO NOT need qmake to build this. In fact, I’m including the g++ commands to build. You may have to adjust for your environment. Assuming you’re building main.cpp from the root of the repo, and the tensorflow build we just created was cloned and built in the same directory, all paths should be relative and work out of the box.
So there we have it. I did notice that the percentages aren’t exactly the same as when I ran the Keras models directly. I’m not sure if this is a difference in compiler settings or if Keras is overriding some calculations. But this gets you well on your way to running a Keras model in C++.
I recently needed to figure out how to deploy a Keras application on Windows 10 without requiring someone to know how to install Python and all of the dependencies for the application. This video shares what I learned in the process, showing how to deploy a Python app with many third party dependencies into a single deployment package complete with executable.