CENTERSTAGE TFOD-prop-far Autonomous Program

This is a tutorial on creating a TensorFlow-based FIRST Tech Challenge (FTC) autonomous program for the CENTERSTAGE game.

Visit the FTC docs page on Creating Op Modes in Blocks if you need help getting started and creating your first driver-controlled program.

We will use TensorFlow to detect a Team Prop on the randomized spike mark from the starting position and also park backstage. We’ll make use of motor encoders to make our robot’s moves more accurate. Because we don’t stop to check each spike mark, this program is a little faster, which helps when starting at the front of the field where you have a long way to drive backstage.
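As a side note on the encoder-based moves mentioned above, the core idea is converting a distance into an encoder target position. This is a minimal sketch of that math; the ticks-per-revolution and wheel-diameter values are example numbers only (check your own motor and wheel specs), not values from this tutorial's robot.

```java
// Sketch of the arithmetic behind encoder-based driving: turning a desired
// distance into an encoder target. All constants are EXAMPLE values for a
// hypothetical 20:1 HD Hex motor and 90 mm wheel; substitute your hardware's.
public class EncoderMath {
    static final double TICKS_PER_REV = 560.0;    // example motor encoder resolution
    static final double WHEEL_DIAMETER_MM = 90.0; // example wheel diameter

    // Convert a desired travel distance in millimeters to encoder ticks.
    static int mmToTicks(double mm) {
        double circumferenceMm = Math.PI * WHEEL_DIAMETER_MM;
        return (int) Math.round(mm / circumferenceMm * TICKS_PER_REV);
    }
}
```

With these example constants, one field tile (about 610 mm) works out to roughly 1208 ticks.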

The autonomous period includes 20 bonus points that can be scored if you use a Team Prop and correctly place the purple and yellow pixels. Assuming you already have a program that recognizes the pixel on the spike mark, we can change that program to detect the team prop instead and earn bonus points.

Prerequisites/Assumptions

This tutorial doesn’t explain how to program in Blocks, but you can probably follow along even if you’re new to Blocks.

Plan

The plan is to start aligned with the rear edge of tile A4, with the rear of the robot flat against the field wall. The program will use TensorFlow from the starting position to decide which spike mark has the Team Prop. It will then drive to that mark, place the purple pixel, and then drive backstage to park.

The basic plan:

  • Use TensorFlow to check which spike mark has the team prop;
  • If the team prop is on the left mark, drive forward, drop off the purple pixel on the left mark, then turn and drive to the backstage corner tile A6;
  • If the team prop is on the center spike mark, drive forward, drop off the purple pixel, then turn and park on tile A6. We’ll also use the center spike mark if the team prop is not found;
  • If the team prop is on the right mark, drive toward the right mark, drop off the purple pixel on the right mark, then turn and park on tile A6.
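The three branches above (plus the center fallback) can be sketched as a simple dispatch. This is an illustration only; the location string would come from the TensorFlow detection step, and the returned action descriptions are just summaries of the plan, not real drive code.

```java
// The plan's three branches as plain code. "location" is whatever the
// detection step decided; center doubles as the fallback when no team
// prop is found. Strings stand in for the actual encoder drive sequences.
public class AutoPlan {
    static String plan(String location) {
        switch (location) {
            case "left":  return "forward, drop pixel on left mark, turn, park on A6";
            case "right": return "toward right mark, drop pixel, turn, park on A6";
            default:      return "forward, drop pixel on center mark, turn, park on A6";
        }
    }
}
```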

CENTERSTAGE TensorFlow for Team Props

Obviously you need Team Props to get the bonus points. See the creating team props page for information on creating your own team props. For this program we are using Red and Blue Duplo props.

Team Props made from LEGO Duplo

You should have taken videos of the team props in various sizes, orientations, backgrounds, and lighting conditions, and then trained a custom TensorFlow model. In this case the videos were taken with the robot in the starting location, set up so it could see all three spike marks. The videos were taken at 480p, a resolution of 640×480.

The above images are video frames showing the Team Props being labeled. Note how the Logitech C270 can only see the center black mark of the left and right spike marks. However, we can still train TensorFlow to recognize the team prop even if it is not fully visible.

The resulting TensorFlow model was downloaded and saved as a file named redBlueDuploFar.tflite. Because we’re using Blocks, the file was uploaded to the Robot Controller with that name.

TFOD-prop-far

Turn your robot on and connect to the Blocks programming environment. Then copy your TFOD-pixel or TFOD-prop program. My last version of that program was called TFOD-pixel3, so I copied that program to create TFOD-prop3.

If you copied TFOD-prop, just change the modelFileName to match your model file. Otherwise, find the initTFOD function and change it as follows:

  • Disable or delete the easyCreateWithDefaults block
  • Add the indicated MyTfodProcessorBuilder blocks and specify your model file name in the setModelFileName block
  • Create a list with the labels that are in the model file and use it in the setModelLabels block
  • Add a myTfodProcessorBuilder.build block, which should include a set MyTfodProcessor block
  • Optional: adjust the setMinResultConfidence value. It defaults to 0.75; you might need to set it lower to detect your props.
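To make the last step concrete, here is a self-contained sketch of what the confidence threshold does conceptually: detections scoring below the minimum are dropped before your op mode sees them. The `Recognition` class here is a simplified stand-in for illustration, not the FTC SDK class.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual illustration of setMinResultConfidence: recognitions below
// the threshold are filtered out. "Recognition" is a simplified stand-in
// (label + confidence only), NOT the real FTC SDK Recognition class.
public class ConfidenceFilter {
    static class Recognition {
        final String label;
        final double confidence;
        Recognition(String label, double confidence) {
            this.label = label;
            this.confidence = confidence;
        }
    }

    // Keep only recognitions at or above the minimum confidence.
    static List<Recognition> filter(List<Recognition> all, double minConfidence) {
        List<Recognition> kept = new ArrayList<>();
        for (Recognition r : all) {
            if (r.confidence >= minConfidence) kept.add(r);
        }
        return kept;
    }
}
```

Lowering the threshold keeps weaker detections, which helps if your prop is sometimes missed, at the cost of more false positives.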

Unlike TFOD-pixel or TFOD-prop, we don’t need to move as our first step; instead, step 1 is to detect the Team Prop from the starting position. We’ll use the same detectProp function as TFOD-prop. In addition to detecting IF the prop was found, we also want to know WHERE in the image it was found.

We call detectProp, just like we used to call detectPixel or detectProp, but if we do detect a team prop, we look at WHERE it was found. The myTfodRecognitions variable was set to a list of all TensorFlow-detected objects. We’ll assume for now that there’s only one detection, so we’ll use a List block to get the first item in the myTfodRecognitions list. We then check the Left position value for that recognition, which is the Left coordinate of the object’s bounding box.

Remember that we trained our model to detect the team prop when all three spike marks are visible. So if the prop is on the left mark, the Left position value will be low (likely zero, because the prop will be at the edge of the frame). In this case we check for a Left value < 10. We’re using 640×480 resolution, so after checking for Left < 10, we can check Left < 400; if so, we know the prop is on the center spike mark. And if Left is >= 400, the prop must be on the right spike mark. If we time out without a TensorFlow detection, we will assume the prop is on the center spike mark. We provide telemetry about whether the prop was detected and which location we are going to use.
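The Left-coordinate test described above is easy to express as plain code. The thresholds (10 and 400) come from the text and assume 640×480 frames with this particular camera placement; a different camera or starting position would need different cutoffs.

```java
// The Left-coordinate classification from the text. Thresholds assume
// 640x480 frames and the camera placement used in this tutorial.
public class PropLocation {
    static String locate(double left) {
        if (left < 10)  return "left";   // bounding box hugs the frame edge
        if (left < 400) return "center";
        return "right";                  // Left >= 400
    }
}
```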

This program is only partially completed. It has driving logic so that if the Location = center, then the robot will drive out and drop off the purple pixel and then park backstage. The driving logic for the right and left spike marks was not done, but comments in the code indicate where that would happen.

Here is the resulting program:

TFOD-prop-far

Here’s a video of the Pushbot running the program with a Team Prop on the center spike mark. It detects the team prop and uses the Left position value to determine that the prop is on the center spike mark. The program then drives to that mark, places the purple pixel, and parks backstage.

Next Steps

This program still needs driving code to place the purple pixel on the other spike marks. Then you could copy the program to make versions that work from the other three starting positions.

If you have a way to deploy the yellow pixel onto the backdrop, you would probably want to use April Tag driving to position in front of the backdrop area that corresponds to the randomized spike mark to get the bonus points. From the rear side of the field you could probably use encoder-based movement, but from the front of the field it might be easier to use the April Tags on the backdrops.
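One common way to "drive on a tag" is a small proportional correction based on the tag's horizontal offset. This is a minimal sketch under assumptions: it presumes you can read a horizontal offset in millimeters from your April Tag processor (positive meaning the tag is to the robot's right), and the gain, deadband, and power cap are made-up example numbers, not tuned values.

```java
// Minimal proportional-alignment sketch for April Tag driving.
// ASSUMPTION: tagOffsetMm is the tag's horizontal offset from camera
// center in mm (positive = tag to the robot's right). All constants
// are illustrative and would need tuning on a real robot.
public class TagAlign {
    static final double GAIN = 0.01;      // example proportional gain
    static final double DEADBAND_MM = 5;  // close enough; stop correcting
    static final double MAX_POWER = 0.3;  // cap on strafe motor power

    // Returns a strafe power: positive strafes right, negative strafes left.
    static double strafePower(double tagOffsetMm) {
        if (Math.abs(tagOffsetMm) < DEADBAND_MM) return 0.0;
        double p = GAIN * tagOffsetMm;
        return Math.max(-MAX_POWER, Math.min(MAX_POWER, p));
    }
}
```

The deadband prevents the robot from twitching around a tiny error, and the power cap keeps large offsets from commanding an unsafe strafe speed.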

Getting Help

It is often possible to use Google (or another search engine) to get help or solve problems. There are lots of resources online. If you’re still stuck, you can ask for help here.