Original article was published by Don Restarone on Artificial Intelligence on Medium
The Rails Side
The Rails application in my repository is stored under the object-detectr/potash directory. Set up Active Storage in Rails as per this tutorial, and set up the following entity relationships:
rails g model Image analyzing:boolean analysis_completed:boolean assertion:string
rails g model ImageAnalysis image:references
After generating the models, run the migrations and install Ahoy. Once the tables are set up, your database schema should look like this:
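Since the schema screenshot isn’t reproduced here, the relevant tables (derived from the two generators above, expressed in migration terms) would look roughly like this; Active Storage and Ahoy add their own tables (active_storage_blobs, active_storage_attachments, ahoy_visits, and so on):

```ruby
# Sketch only, not the literal schema.rb: the tables the generators create.
create_table :images do |t|
  t.boolean :analyzing
  t.boolean :analysis_completed
  t.string :assertion
  t.timestamps
end

create_table :image_analyses do |t|
  t.references :image, foreign_key: true
  t.timestamps
end
```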
Open app/models/image.rb and drop in the following:
This will tell Active Storage that this model has an attachable file named captured_image and that it has one image_analysis association. We will use the captured_image attribute to store the image given by the user, and the image_analysis association to store the analyzed image which, at the end, will be presented to the user.
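Based on that description, the model is presumably just the two declarations (a sketch, since the embedded snippet isn’t shown here):

```ruby
class Image < ApplicationRecord
  # Active Storage attachment holding the user's uploaded image
  has_one_attached :captured_image
  # Association holding the analyzed (annotated) output
  has_one :image_analysis
end
```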
The model at app/models/image_analysis.rb looks identical to image.rb. (This duplication is a code smell; you could easily unify the two into one model.)
With this in place, let’s generate a controller that will render a view and handle the file upload.
rails g controller Images new create
For convenience, we will change the route file to present the user with the page for image upload when they visit the root path.
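A minimal routes.rb that does this might look like the following; the exact route names are an assumption based on the controller generated above:

```ruby
Rails.application.routes.draw do
  # Present the upload form at the root path
  root "images#new"
  # Receive the uploaded file
  resources :images, only: [:new, :create]
end
```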
Drop in the plumbing that handles the image upload, scaling, and Python script invocation. While you’re at it, feel free to call the code police, because I just broke the single-responsibility principle to smithereens. I put it all in one file to make explaining what’s going on a little easier. Guilty as charged.
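Since the embedded listing isn’t reproduced here, a skeleton of what that controller plausibly looks like follows; the helper names, constants, and parameter keys are my own placeholders, not taken from the repository:

```ruby
class ImagesController < ApplicationController
  before_action :ensure_file_attached, only: :create

  def new
  end

  def create
    token = current_visit.visitor_token # Ahoy's per-session identifier
    image = Image.create!(analyzing: true)
    image.captured_image.attach(params[:image][:captured_image])
    scaled_path = scale_image(image)                # placeholder helper
    system("cp #{scaled_path} #{PYTHON_INPUT_DIR}") # hypothetical constant
    system("python #{PYTHON_SCRIPT_PATH}")          # hypothetical constant
    output_path = expected_output_path(token)       # placeholder helper
    if File.exist?(output_path)
      analysis = ImageAnalysis.create!(image: image)
      analysis.captured_image.attach(io: File.open(output_path),
                                     filename: "#{token}.png")
    end
  end

  private

  def ensure_file_attached
    redirect_to root_path unless params.dig(:image, :captured_image)
  end
end
```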
So let’s break it down. The new action renders the view that allows the user to attach a file for analysis (more on this a little later), and the before_action hook ensures that the user must attach a file before moving forward.
The action happens in the create method, where we grab the unique token for the current user session from Ahoy with a call to current_visit.visitor_token. Using that token as an identifier, we create the image, scale it, and grab the path to the scaled image.
Once the paths to the scaled image and the Python script are calculated, we make two system calls: one to copy the scaled image to the directory that expects the input (on the Python side), and one to run the Python script that will call the TensorFlow model.
After the Python script runs, we compute the expected file name of the output and the path to the directory where it will be placed. If the file has been created without error, we create an ImageAnalysis instance and attach the analyzed image to it.
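The copy-run-check portion of create can be sketched as plain Ruby, independent of Rails. All names here (run_analysis, input_dir, output_dir) are illustrative, not taken from the actual repository:

```ruby
require "tmpdir"
require "fileutils"

# Plain-Ruby sketch of the system-call pipeline described above.
def run_analysis(scaled_image_path, input_dir, output_dir, script_command)
  # 1. Copy the scaled image into the directory the Python side reads from.
  FileUtils.cp(scaled_image_path, input_dir)
  # 2. Invoke the analysis script (any shell command) with a system call.
  system(script_command)
  # 3. Compute the expected output path; return it only if the file exists.
  expected = File.join(output_dir, File.basename(scaled_image_path))
  File.exist?(expected) ? expected : nil
end
```

Returning nil when the output file is missing mirrors the error check in the controller: the view can then decide whether to render the result or an apology.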
Let’s take a look at the views rendered to the user. The new action will present the user with a form and a submit button:
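The embedded view isn’t shown here; a sketch of app/views/images/new.html.erb, based on the description that follows (field names, ids, and the inline script are my guesses):

```erb
<%# Sketch of app/views/images/new.html.erb; names are illustrative %>
<%= form_with model: Image.new, local: true do |f| %>
  <%= f.file_field :captured_image, onchange: "readURL(this)" %>
  <%= image_tag "", id: "preview" %>
  <%= f.submit "Analyze!" %>
<% end %>

<script>
  // Show a preview of the selected file before upload
  function readURL(input) {
    if (input.files && input.files[0]) {
      const reader = new FileReader();
      reader.onload = (e) => {
        document.getElementById("preview").src = e.target.result;
      };
      reader.readAsDataURL(input.files[0]);
    }
  }
</script>
```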
An onChange listener is added to the form, which fires the function readURL; it sets the source attribute of the img tag to the selected image. With this code in place, we are able to render a preview as soon as the user attaches an image to the form. Let’s take a look at the view that renders the output once they click Analyze!:
We check whether the processed image exists; if it does, we render it. Otherwise, we present an error asking the user to give us another chance.
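Again a sketch, since the embed is missing; the result view might read something like this (the @image instance variable and wording are my assumptions):

```erb
<%# Sketch of app/views/images/create.html.erb; names are illustrative %>
<% if @image&.image_analysis&.captured_image&.attached? %>
  <%= image_tag @image.image_analysis.captured_image %>
<% else %>
  <p>Something went wrong while analyzing your image. Please give us another chance!</p>
<% end %>
```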
All right, that’s all for Rails/Ruby. Let’s move on to the Python side!