Mouse Control for Shooting Game using OpenCV and Python


Photo by pixabay.com

Hi everyone! After almost two weeks without posting a new story, I’m finally back!! And I still want to share experiences, thoughts, and opinions about technology related to the software engineering field!! Recently, I’ve been learning about Artificial Intelligence, especially the Computer Vision field. When we talk about Computer Vision, OpenCV is probably the first thing that comes to mind. That’s because OpenCV is a very popular library for Computer Vision, and it’s open source under the BSD license! Visit the official website at https://opencv.org/ to learn more!

OpenCV is applied in many areas, for example facial recognition, gesture recognition, motion tracking, and so on. In this story, however, we will only use the image processing side to detect the color of an object. As the title suggests, we will play a simple shooting game that only needs mouse movement and a left click, but we will control the mouse with colored objects recognized by OpenCV!! For the programming language we will use Python, since it is very popular for AI.

Here is a summary of what we need for this experiment.

  • Simple Shooting Game
  • 2 Objects
  • Python
  • OpenCV

Shooting Game

Which game will we play? It’s called Metro Cop, a demake of the classic Sega arcade game Virtua Cop, if you have ever played it. The game is available online at https://helpcomputer.itch.io/metro-cop, and we can easily play it in the browser. As I said, this game only needs mouse movement and clicks.

Picture Metro Cop Game by indieretronews.com

Objects

In this case, I’m using two objects: an orange one for movement and a green one for clicking.

Picture 1 My Objects

Python

My Python version was 3.7.7 when running this experiment. I assume you already have Python installed on your machine; if not, visit the official Python website at https://www.python.org/ to learn how to install it in your environment.

OpenCV

My OpenCV version is 4.2.0 at the time of writing. I also assume you already have OpenCV installed; visit the official link mentioned earlier to learn how to install it.

Building Basic Application

Make sure you have pip installed on your system. Pip is the package installer for Python. It usually comes pre-installed with Python 2 >= 2.7.9 or Python 3 >= 3.4 downloaded from python.org. At the time of writing, my pip version is 20.0.2.

Next, we need to install the OpenCV package for Python (typically with pip install opencv-python). Follow this link https://pypi.org/project/opencv-python/ for guidance.

To verify that our environment is ready, write a simple application that reads from the webcam and shows the video output in a window. Save the file as main.py and run the application with the command python3 main.py

Picture 2 Basic OpenCV App

If you get a result like the picture above, it means we are ready to move on to the next step! Here is a quick explanation of the code.

First, we need to import the OpenCV library using import cv2. Then we access our camera to capture video with cap = cv2.VideoCapture(0). The 0 means the default camera of our PC/laptop; if we have several cameras, the number refers to the camera index.

Next, we need to loop over the captured images to produce video output using while True. Inside the loop, we capture frame by frame with the cap.read() function. By default, the frame is not mirrored like when we see ourselves in a mirror. Personally, I prefer to flip it, so we add frame = cv2.flip(frame, 1). A value of 1 (> 0) flips horizontally, 0 flips vertically, and a value < 0 flips both vertically and horizontally.

cv2.waitKey(1) waits 1 ms for a key press before the loop moves on to the next frame. if key == 27 checks for the ESC key code; when we press ESC, the loop ends and the window is closed.

Then we display the resulting frame using cv2.imshow("frame", frame); the “frame” string becomes the title of the window. When everything is done, release the capture with cap.release() and cv2.destroyAllWindows()
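
Putting those pieces together, a minimal sketch of the basic webcam app could look like the following (the flow just follows the explanation above):

import cv2

cap = cv2.VideoCapture(0)           # 0 = default camera

while True:
    ret, frame = cap.read()         # capture frame by frame
    if not ret:
        break
    frame = cv2.flip(frame, 1)      # mirror the frame horizontally
    cv2.imshow("frame", frame)      # "frame" becomes the window title
    key = cv2.waitKey(1)            # wait 1 ms for a key press
    if key == 27:                   # 27 = ESC key code
        break

cap.release()
cv2.destroyAllWindows()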

Finding Color of Objects

We may not all share the same perception of a color name. In this experiment I talk about orange and green, but on your side it could be a “different orange and green” from mine. The solution is to track the color with OpenCV and find the best range for our “orange” or “green”. Instead of using RGB (Red, Green, Blue) values, we will convert to HSV (Hue, Saturation, Value). Let’s build this simple color-tracking application.

Save the file, say as main.py, and run it with the python3 main.py command. We will get three frames, named frame, mask, and result, plus one mini window for the Trackbars. To find the color of an object, put the object inside the frame and adjust the lower (L-H, L-S, L-V) and upper (U-H, U-S, U-V) values in the Trackbars window.

Picture 3 Tracking Object Color

Yes, we got the orange color range! As you can see, the mask and result frames only show the orange color! Still confused? Let me explain it, along with the code.

First, we create the Trackbars by adding this code.

def nothing(x):
    pass

cap = cv2.VideoCapture(0)
cv2.namedWindow('Trackbars')
cv2.createTrackbar('L - H', 'Trackbars', 0, 179, nothing)
cv2.createTrackbar('L - S', 'Trackbars', 0, 255, nothing)
cv2.createTrackbar('L - V', 'Trackbars', 0, 255, nothing)
cv2.createTrackbar('U - H', 'Trackbars', 179, 179, nothing)
cv2.createTrackbar('U - S', 'Trackbars', 255, 255, nothing)
cv2.createTrackbar('U - V', 'Trackbars', 255, 255, nothing)

At the beginning, cv2.namedWindow('windowName') creates a new window with the given name. Next, cv2.createTrackbar('trackbarName', 'windowName', minValue, maxValue, callbackOnChange) creates a trackbar with the following configuration: trackbar name, the window it belongs to, minimum value, maximum value, and a callback invoked on change. Because the maximum Hue value is 179, we use 179 only for H; the rest use 255. And since we don’t want to do anything on change, we pass a function that does nothing as the callback.

Next, we convert the frame to HSV using hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV). What does the HSV frame look like? You can see in the picture below; it looks a bit creepy 🙂

Picture 4 HSV Look Like

Then we read the values from the Trackbars window based on trackbar and window name.

l_h = cv2.getTrackbarPos('L - H', 'Trackbars')
l_s = cv2.getTrackbarPos('L - S', 'Trackbars')
l_v = cv2.getTrackbarPos('L - V', 'Trackbars')
u_h = cv2.getTrackbarPos('U - H', 'Trackbars')
u_s = cv2.getTrackbarPos('U - S', 'Trackbars')
u_v = cv2.getTrackbarPos('U - V', 'Trackbars')

Next, we look for the object color in the HSV frame within the range from lower to upper that we adjusted. The output, as you can see in Picture 3, is the second frame: the object appears white on a black background. To work with arrays easily, we use numpy, and the inRange call keeps only the pixels between the lower and upper HSV values.

import numpy as np
...
lower = np.array([l_h, l_s, l_v])
upper = np.array([u_h, u_s, u_v])
mask = cv2.inRange(hsv, lower, upper)

Next, bitwise_and creates the third frame output in Picture 3. Based on the mask frame, we get the same object, but with its original color.

result = cv2.bitwise_and(frame, frame, mask=mask)

Finally, show all the frames with this code.

cv2.imshow('frame', frame)
cv2.imshow('mask', mask)
cv2.imshow('result', result)
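
For reference, here is how those snippets can be assembled into one runnable color-finding script (a sketch; the structure simply mirrors the fragments above):

import cv2
import numpy as np

def nothing(x):
    pass

cap = cv2.VideoCapture(0)
cv2.namedWindow('Trackbars')
cv2.createTrackbar('L - H', 'Trackbars', 0, 179, nothing)
cv2.createTrackbar('L - S', 'Trackbars', 0, 255, nothing)
cv2.createTrackbar('L - V', 'Trackbars', 0, 255, nothing)
cv2.createTrackbar('U - H', 'Trackbars', 179, 179, nothing)
cv2.createTrackbar('U - S', 'Trackbars', 255, 255, nothing)
cv2.createTrackbar('U - V', 'Trackbars', 255, 255, nothing)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.flip(frame, 1)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # read the current trackbar positions into lower/upper HSV bounds
    lower = np.array([cv2.getTrackbarPos(n, 'Trackbars') for n in ('L - H', 'L - S', 'L - V')])
    upper = np.array([cv2.getTrackbarPos(n, 'Trackbars') for n in ('U - H', 'U - S', 'U - V')])

    mask = cv2.inRange(hsv, lower, upper)               # white where the color matches
    result = cv2.bitwise_and(frame, frame, mask=mask)   # original colors, masked

    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('result', result)

    if cv2.waitKey(1) == 27:    # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()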

Repeat the steps above to find the green color range. Finally, here are my color ranges:

lower_orange = np.array([0, 109, 195])
upper_orange = np.array([17, 255, 255])
lower_green = np.array([37, 130, 95])
upper_green = np.array([48, 190, 173])

Picture 5 Tracking Another Object Color

Building The Application

Okay, we already have the ranges for the object colors; it’s time to build the application!

At the beginning, there is a function (image_resize) for resizing the window. I’m using it because I want both the game and our frame to fit on the screen together. The reference for resizing the window while keeping the aspect ratio is here: https://stackoverflow.com/questions/44650888/resize-an-image-without-distortion-opencv
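
Here is a sketch of such a helper, adapted from that Stack Overflow answer (parameter names are illustrative):

import cv2

def image_resize(image, width=None, height=None, inter=cv2.INTER_AREA):
    # Resize to the requested width OR height while keeping the aspect ratio.
    (h, w) = image.shape[:2]

    if width is None and height is None:
        return image                    # nothing requested, return unchanged

    if width is None:
        r = height / float(h)           # scale factor from the requested height
        dim = (int(w * r), height)
    else:
        r = width / float(w)            # scale factor from the requested width
        dim = (width, int(h * r))

    return cv2.resize(image, dim, interpolation=inter)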

Next, let’s jump to the cv2.findContours call. Like cv2.bitwise_and, it works on the mask frame. A contour can be explained simply as a curve joining all the continuous points along a boundary that have the same color or intensity. RETR_EXTERNAL retrieves only the extreme outer contours, and CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal segments, leaving only their end points.

Contours contain multiple coordinates, which is why we loop over them with for c in contours. We also check the contour area with cv2.contourArea(contour) to make sure the detected area has a reasonable size. To draw a contour, we use the cv2.drawContours(img, contours, contourIdx, color, thickness) function. The result of drawing the contour looks like the picture below.
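
As a sketch, the contour step could be wrapped in a function like the one below; mask and frame come from the color-tracking code, and min_area is an illustrative threshold, not necessarily the value used in the original script:

import cv2

def draw_color_contours(frame, mask, min_area=500):
    # RETR_EXTERNAL: only the outer contours; CHAIN_APPROX_SIMPLE: keep end points only
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < min_area:                   # skip tiny, noisy detections
            continue
        cv2.drawContours(frame, [c], -1, (0, 255, 0), 2)    # draw this contour in green
    return frame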

Picture 6 Draw Contour Based on Mask Color

Nice. Now we will integrate the contours with the mouse control functions. After doing some research, I found several modules for controlling the mouse. First, PyAutoGUI (https://pypi.org/project/PyAutoGUI/), but in my opinion it’s slow and can’t keep up with our gestures. Then I looked for another solution and found PyMouse (https://pypi.org/project/PyMouse/). It’s better, and so far I haven’t had any issues with it.

So, how do we move and click the mouse? PyMouse already provides these functions. We need to import the module using from pymouse import PyMouse and create an instance like this: m = PyMouse()

Then we can use m.move(x, y) and m.click(x, y, 1) to control the mouse, where (x, y) are the coordinates of the pointer. Where do we get the coordinates? From the contour. If we look at the orange contour handling, we see x, y, _, _ = cv2.boundingRect(c), which returns the position of the contour. PyMouse also has a function that returns the current pointer position; since a click needs coordinates, we read the current position with x, y = m.position() to avoid clicking in the wrong place.
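
To show how those PyMouse calls fit with the contours, here is a sketch with two hypothetical helpers (the function names and the min_area threshold are mine, not from the original code):

from pymouse import PyMouse
import cv2

m = PyMouse()

def move_with_orange(contours, min_area=500):
    # move the pointer to the top-left corner of a big-enough orange contour
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, _, _ = cv2.boundingRect(c)
        m.move(x, y)

def click_with_green():
    # read the current pointer position first, so we click where the pointer is
    x, y = m.position()
    m.click(x, y, 1)    # 1 = left button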

I think that covers the explanation for this simple experiment. Ah, one more thing: I added a rule to delay the click. In my experience, without it, whenever the green color was detected the click function would fire in a fast loop, and our gesture can’t keep up with that speed. That’s why I added a check on the time of the last click to avoid multiple clicks.
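
One way to implement that delay rule could look like this (a sketch, not necessarily the exact code; the one-second value is an assumption to tune):

import time

last_click = 0.0
CLICK_DELAY = 1.0    # seconds between allowed clicks; illustrative value

def click_if_allowed(mouse):
    global last_click
    now = time.time()
    if now - last_click < CLICK_DELAY:
        return                          # too soon since the last click, skip
    x, y = mouse.position()
    mouse.click(x, y, 1)
    last_click = now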

Okay, how do we hold the objects to play? You can follow my hand in the picture below. When we want to shoot, we open the thumb.

Picture 7 Object Position

Testing

Cool, we have built the application; it’s time to test it! Run the application using python3 main.py and open the URL of the game in a browser. Make sure the browser tab is active. Now try moving the orange object, which acts as our pointer. Nice, it works! Next, try opening your thumb so the green color is detected as a click. Yes, it clicked! Now we can play the game without touching the mouse or touchpad!!

Picture 8 Testing the App

Improvement

I know this is a basic application, or just an idea of how to control the mouse without touching it directly. There are probably many things that don’t work properly. For example, here is what I found:

  • The pointer can’t reach the bottom of the screen. This becomes an issue when the game screen is bigger on your side.
  • The color detection can conflict when our background or surroundings have the same color. We could add extra rules, such as shape + color instead of color alone (see the sketch below).
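
As a rough idea for the shape + color rule, a check like the one below could be added before accepting a contour (this is an assumption, not part of the original code):

import cv2

def looks_like_rectangle(contour):
    # approximate the outline and require roughly four corners
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.04 * peri, True)
    return len(approx) == 4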

That’s why, if you have any ideas or suggestions to improve this, feel free to make a PR! Thank you! Here is the GitHub repository for this experiment.

Reference