CENTERSTAGE TFOD-prop-April-Tag Autonomous Program

This is a tutorial on creating a FIRST Tech Challenge (FTC) autonomous program for the CENTERSTAGE game using TensorFlow and April Tags.

Visit the FTC docs page on Creating Op Modes in Blocks if you need help getting started and creating your first driver-controlled program.

We will use TensorFlow to detect a Team Prop on the randomized spike mark from the starting position. We’ll make use of motor encoders to make our robot’s movements more accurate. We’ll then line up on the backdrop area using April Tags.

The autonomous period includes 20 bonus points that can be scored if you are able to use a Team Prop and correctly place the purple and yellow pixels. We can make use of TensorFlow to find the Team Prop and April Tags to line up on the backdrop area.

Prerequisites/Assumptions

This tutorial assumes you know the basics of Blocks programming. You can probably follow along even if you’re new to Blocks; however, this tutorial doesn’t explain how to program in Blocks.

Plan

The autonomous plan is to start aligned with the rear edge of tile A4, with the rear of the robot flat against the field wall. The program will use TensorFlow from the starting position to decide which spike mark has the Team Prop. It will then drive to that mark and place the purple pixel. The robot will turn to face the backdrop. It will then turn on April Tag processing to detect the backdrop April Tag that corresponds to the TensorFlow detection. The robot will drive up to that backdrop April Tag and stop. This robot has no way to place a yellow pixel on the backdrop, so we just park there.

The basic plan (a rough code outline follows this list):

  • Use TensorFlow to check which spike mark has the Team Prop;
  • If the Team Prop is on the left mark, drive forward, drop off the purple pixel on the left mark, and turn to face the backdrop; [Not implemented yet]
  • If the Team Prop is on the center spike mark, drive forward, drop off the purple pixel, and then turn to face the backdrop;
  • If the Team Prop was found on the right mark, drive towards the right mark, drop off the purple pixel on the right mark, and turn to face the backdrop; [Not implemented yet]
  • For each spike mark location there is a corresponding April Tag on the blue backdrop. We will call a function that detects that April Tag and drives towards it.
  • April Tag driving can only get so close. Once we are close, we switch to encoder-based driving to cover the remaining distance and touch the backdrop, with the webcam lined up on the April Tag and backdrop area that corresponds to the spike mark location where the Team Prop was detected.
  • This is where the robot would place the yellow pixel on the backdrop; the Pushbot is not able to do this.
  • Then the robot should park backstage in the corner so your alliance partner can also place a pixel if they are able. [Not implemented yet]
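Blocks doesn’t translate one-for-one into text, but here is a rough Java-style outline of that flow (Blocks calls map onto the same FTC SDK methods). The helper names driveToSpikeMark, tagIdFor, and encoderDriveToBackdrop are made up for illustration; initVisionPortal, detectProp, and aprilTagDrive are the actual functions described below.

    // Rough outline only; the real program is written in Blocks.
    @Override
    public void runOpMode() {
        initVisionPortal();                     // set up the TFOD + April Tag processors
        waitForStart();

        String location = detectProp();         // "left", "center", or "right"
        driveToSpikeMark(location);             // drop the purple pixel, face the backdrop
        int desiredTagId = tagIdFor(location);  // blue backdrop tags are 1, 2, or 3
        aprilTagDrive(desiredTagId);            // drive until close to the tag
        encoderDriveToBackdrop();               // slow final approach
        // Yellow pixel placement and backstage parking are not implemented.
    }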

This program is not complete: the driving logic for the Left and Right spike marks was not implemented, nor was parking backstage. This robot doesn’t have a way to place the yellow pixel either, so we can’t do that.

However, this program does show how you can include both TensorFlow and April Tag processing in a Vision Portal. You can then switch from TensorFlow to using April Tags.

We’re using TensorFlow from the starting position and trying to recognize the Duplo Team Props. As you can see, the Team Props will be detected at the outside edges of the image, or near the center.

TFOD-prop-April-Tag

This program is a combination of TFOD-prop-far and RobotAutoDriveToAprilTagTank_Blocks. We won’t go through the program in detail, but I will highlight the interesting parts of the program.

You can find the Blocks program in the Pushbot GitHub repository. Right-click on the TFOD-prop-April-Tag.blk link and save it. Then upload it to your robot if you want to try it.

This program has to initialize both the TensorFlow and April Tag vision processors, so we start by creating an initVisionPortal function. It calls initTfod and initAprilTag to initialize the two processors, and then builds the vision portal.
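In Java terms that looks roughly like this; the field names and the webcam configuration name "Webcam 1" are assumptions, and the later sketches assume these same fields:

    import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
    import org.firstinspires.ftc.vision.VisionPortal;
    import org.firstinspires.ftc.vision.apriltag.AprilTagProcessor;
    import org.firstinspires.ftc.vision.tfod.TfodProcessor;

    private TfodProcessor tfod;
    private AprilTagProcessor aprilTag;
    private VisionPortal visionPortal;

    private void initVisionPortal() {
        initTfod();      // create the TensorFlow processor (see below)
        initAprilTag();  // create the April Tag processor (see below)

        // Build one Vision Portal that runs both processors on the same webcam.
        visionPortal = new VisionPortal.Builder()
            .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
            .addProcessors(tfod, aprilTag)
            .build();
    }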

initTfod sets a custom TensorFlow model filename along with the labels for the model. We also set the minResultConfidence to 0.7.
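A sketch of initTfod; the model path and label here are placeholders for your own trained model:

    private void initTfod() {
        tfod = new TfodProcessor.Builder()
            .setModelFileName("/sdcard/FIRST/tflitemodels/model.tflite")  // placeholder path
            .setModelLabels(new String[] {"prop"})                        // placeholder label
            .build();

        // Discard detections the model is less than 70% confident about.
        tfod.setMinResultConfidence(0.7f);
    }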

initAprilTag is pretty basic, but we do set decimation, which is explained in the comments. This is also the function where we would set lens intrinsics if we had calibrated our webcam.
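A sketch of initAprilTag; the decimation value of 2 is illustrative:

    private void initAprilTag() {
        aprilTag = new AprilTagProcessor.Builder()
            // .setLensIntrinsics(fx, fy, cx, cy)  // only if you calibrated your webcam
            .build();

        // Decimation down-samples the image before detection: higher values run
        // faster but can only detect tags that are closer (larger in the image).
        aprilTag.setDecimation(2);
    }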

Step 1 of this program is to detect the Team Prop and determine which spike mark to drive to. If we do find a Team Prop, we check the position of its bounding box. If the Left value is 10 or less, the Team Prop was found on the left spike mark and we set location=left. Otherwise we check if the Left value is less than 400, which means location=center. Otherwise we assume location=right.

If we didn’t detect a Team Prop, we assume the location is center. The detectProp function has a three second timeout.
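Here’s a sketch of that decision logic with the thresholds and timeout described above; the exact Blocks arrangement may differ (ElapsedTime and Recognition come from the SDK):

    private String detectProp() {
        ElapsedTime timer = new ElapsedTime();
        while (opModeIsActive() && timer.seconds() < 3) {
            for (Recognition recognition : tfod.getRecognitions()) {
                if (recognition.getLeft() <= 10) {
                    return "left";    // bounding box at the left edge of the image
                } else if (recognition.getLeft() < 400) {
                    return "center";
                } else {
                    return "right";
                }
            }
            sleep(20);  // no detection yet; let TFOD process another frame
        }
        return "center";  // timed out with no detection: assume center
    }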

The next step is to use the location value to drive forward. If location=center, we drive straight forward, back up to drop the purple pixel on the spike mark, then turn to face the backdrop. Code for the other locations was not actually implemented.
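The driving itself uses motor encoders, as mentioned in the plan. One possible encoder drive helper for a two-motor robot like Pushbot is sketched below; leftDrive, rightDrive, and COUNTS_PER_INCH are assumptions that depend on your hardware:

    private void encoderDrive(double speed, double inches) {
        int target = (int) (inches * COUNTS_PER_INCH);  // depends on motor and wheel size

        leftDrive.setMode(DcMotor.RunMode.STOP_AND_RESET_ENCODER);
        rightDrive.setMode(DcMotor.RunMode.STOP_AND_RESET_ENCODER);
        leftDrive.setTargetPosition(target);
        rightDrive.setTargetPosition(target);
        leftDrive.setMode(DcMotor.RunMode.RUN_TO_POSITION);
        rightDrive.setMode(DcMotor.RunMode.RUN_TO_POSITION);

        leftDrive.setPower(Math.abs(speed));
        rightDrive.setPower(Math.abs(speed));
        while (opModeIsActive() && (leftDrive.isBusy() || rightDrive.isBusy())) {
            idle();  // wait for both motors to reach their targets
        }
        leftDrive.setPower(0);
        rightDrive.setPower(0);
    }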

Next, the program assumes the robot is facing the backdrop, so now it can use the April Tags on the backdrop to drive. We start by changing the webcam exposure and gain to reduce motion blur, since we’ll be detecting April Tags while moving. Because our starting location was on the blue alliance side, the April Tag IDs on the blue backdrop are 1, 2, or 3, and we pick the one corresponding to the detected Team Prop.
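In Java the switch-over looks roughly like this (ExposureControl and GainControl are real Vision Portal camera controls; the exposure and gain values are illustrative and should be tuned for your webcam):

    // Stop TensorFlow and rely on the April Tag processor from here on.
    visionPortal.setProcessorEnabled(tfod, false);
    visionPortal.setProcessorEnabled(aprilTag, true);

    // Shorten the exposure to reduce motion blur, then raise the gain to
    // compensate for the darker image. The camera must be streaming first.
    ExposureControl exposure = visionPortal.getCameraControl(ExposureControl.class);
    exposure.setMode(ExposureControl.Mode.Manual);
    exposure.setExposure(6, TimeUnit.MILLISECONDS);
    GainControl gain = visionPortal.getCameraControl(GainControl.class);
    gain.setGain(250);

    // Blue backdrop tags: left = 1, center = 2, right = 3.
    int desiredTagId = location.equals("left") ? 1 : location.equals("center") ? 2 : 3;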

The last step is to drive using the RobotAutoDriveToAprilTag logic. In this case we copied the main driving logic of the sample program into an aprilTagDrive function. This function loops until the rangeError is less than 2 inches. In practice the robot stopped moving with a range error of about 1 inch, which was close enough.

That’s only part of the function; the remainder calculates the various error values and uses them to call the moveRobot function, just like the sample program.
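A condensed sketch of aprilTagDrive, adapted from the tank-drive sample; the gains, clip limits, and stand-off distance are illustrative, and moveRobot is the sample’s drive helper:

    private void aprilTagDrive(int desiredTagId) {
        final double SPEED_GAIN = 0.02;        // forward power per inch of range error
        final double TURN_GAIN = 0.01;         // turn power per degree of bearing error
        final double DESIRED_DISTANCE = 12.0;  // illustrative stand-off distance, inches
        double rangeError = Double.MAX_VALUE;

        while (opModeIsActive() && rangeError >= 2.0) {  // stop once the error is under 2 inches
            // Look for the backdrop tag that matches the Team Prop location.
            // (A real program should also time out if the tag is never seen.)
            AprilTagDetection target = null;
            for (AprilTagDetection detection : aprilTag.getDetections()) {
                if (detection.id == desiredTagId) target = detection;
            }
            if (target == null) continue;  // tag not visible in this frame

            rangeError = target.ftcPose.range - DESIRED_DISTANCE;
            double headingError = target.ftcPose.bearing;
            double drive = Range.clip(rangeError * SPEED_GAIN, -0.5, 0.5);
            double turn = Range.clip(headingError * TURN_GAIN, -0.25, 0.25);
            moveRobot(drive, turn);  // same tank-drive helper as the sample
        }
        moveRobot(0, 0);  // stop
    }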

Finally, the program uses encoder-based movement to move closer at a slow speed so as not to bump the backdrop hard. The robot attempts a slight left turn to try to line up straight on the backdrop.

This is where the robot could deploy a yellow pixel if it were able to. In a real match the robot should also not park in front of the backdrop, but this program stops there.

Here’s a video of Pushbot running this program.

Next Steps

Complete the program by implementing driving logic if the Team Prop is on the left or right spike marks.

Then create copies of this program to work from all four starting positions. On the Red Alliance side of the field the backdrop has different April Tag IDs that can be used to find the backdrop area.

You might want to add IMU gyroscope control to the driving from the front starting position to the backstage area.

Getting Help

It is often possible to use Google (or another search engine) to get help or solve problems. There are lots of resources online. If you’re still stuck, you can ask for help here.