The first step was to create a virtual environment for the robots. I did all this using a Unity frontend in C#. I started by creating a small piece of land with some hills around the edges and a small cabin and some trees in the middle. I made some areas of the terrain flat and others very bumpy to provide cover and challenges when driving over them. I also put a few other objects of interest in this world, like the gray wall in the pic below. The mission of the bots is to protect the cabin. At times, I wanted them to navigate up some small ramps, through the doors and the cabin, and out the other side. This involved careful pathfinding and obstacle avoidance.

The robots were created by importing 3D models I had for one of my actual robots (AVA, here on LMR). Once all the parts for Ava were imported, I had to carefully align all the axes to where the axes of rotation were on the servos in the bot. This took some hours, but eventually I got it. Next, I needed to build a UI for driving the robot, moving all the "virtual" servos, and firing the lasers. I used a combination of buttons, sliders, and code to implement all the capabilities of the robot. In the shot above, the robot is reattaching one of its lasers that had fallen off. It is also missing its right ear and left hand, which are lying around somewhere in the scene.

I implemented simulated vision by using ray tracing algos built into Unity. By finding the 8 "corners" and/or the center points of the bounding boxes around other objects and doing some ray tracing, it can be determined whether there is a line of sight to something or not. There are other ways to do this: I could have gotten the camera view from the actual position of the cam on the bot and sent that image to some vision processing algos. I went with something that was faster, as I knew I would have lots of bots running at the same time.

I implemented sonars by creating invisible 3D objects that extend out from the bot. These invisible objects represent the FoV for the sonars. Algos in Unity can then be used to detect "collisions" with these objects; these collisions represent sonar detections. This was easy to implement as Unity has all this built in.

Physics was another relatively easy one, configurable through Unity. You just have to configure gravity and mass for all the objects and make sure objects respect their boundaries and don't pass through each other. When an object falls on the ground, it needs to not pass through the ground.

Sides, Teams, and Individual Roles on a Team

A single bot could be a leader, a follower, a scout, a medic, or a casualty. This took a bit of programming, but I eventually got it so I could organize many bots into multiple sides, with multiple teams on each side and multiple bots on each team. The leaders decide on strategy and formations and communicate that to their teams. There was a lot of work involved in deciding what formation to adopt based on an opposing team's position and strength.

Once a formation is picked, there are the particulars of how each bot should navigate to its position within the formation, hold formation, etc. A circle formation (or any other) is different depending on how many bots are on the team. A wedge formation is tricky if all of a sudden the whole wedge needs to pivot. The teams had to pick not only formations, but also whether to move, where to move, whether to wait for the other side to come, whether to take cover, etc. A variety of options are available for aggressive or more conservative defensive postures.

I knew I eventually wanted to use a neural net of some kind, so I didn't want to simply program my own tactical biases into the robots. I wanted them to figure it out for themselves. One way I tried to stay out of deciding the strategy was through roles. I concentrated more on programming lower-level behaviors into a "set of options" from which an AI could choose. Thus, if a bot was asked to be a medic and save its teammates, it would have some decent behaviors to follow.
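The line-of-sight check described in this post (casting rays at the corners of a target's bounding box) is done with Unity's built-in raycasts in the actual project, but the geometry can be sketched outside Unity. A minimal Python sketch, assuming obstacles are axis-aligned bounding boxes; the function names are mine, not from the project's code:

```python
# Hypothetical sketch of corner-based line-of-sight testing.
# In the real project, Unity's ray tracing does this work.

def segment_hits_aabb(p0, p1, box_min, box_max):
    """Slab test: does the segment p0->p1 intersect the axis-aligned box?"""
    t_min, t_max = 0.0, 1.0
    for a in range(3):
        d = p1[a] - p0[a]
        if abs(d) < 1e-9:
            # Segment is parallel to this slab; must already be inside it.
            if p0[a] < box_min[a] or p0[a] > box_max[a]:
                return False
        else:
            t0 = (box_min[a] - p0[a]) / d
            t1 = (box_max[a] - p0[a]) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_min, t_max = max(t_min, t0), min(t_max, t1)
            if t_min > t_max:
                return False
    return True

def aabb_corners(box_min, box_max):
    """The 8 corners of an axis-aligned bounding box."""
    return [(x, y, z) for x in (box_min[0], box_max[0])
                      for y in (box_min[1], box_max[1])
                      for z in (box_min[2], box_max[2])]

def has_line_of_sight(eye, target_min, target_max, obstacles):
    """Visible if a ray to ANY of the target box's 8 corners is unobstructed.
    (The post also mentions testing center points; corners suffice here.)"""
    for corner in aabb_corners(target_min, target_max):
        if not any(segment_hits_aabb(eye, corner, o_min, o_max)
                   for o_min, o_max in obstacles):
            return True
    return False
```

Testing every corner means a target peeking out from behind a wall still counts as visible, which matches the intent of corner-based ray casting.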
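The invisible sonar FoV volumes map naturally onto Unity trigger colliders. In plain code, a similar test can be approximated as a cone check; this is a hedged stand-in (the cone shape, range, and angle are my assumptions, not values from the post):

```python
# Hypothetical stand-in for the invisible sonar FoV objects: model the FoV
# as a cone and test whether a target point falls inside it. In Unity this
# would instead be a trigger collider reporting "collision" events.
import math

def in_sonar_cone(sensor_pos, sensor_dir, target_pos, max_range, half_angle_deg):
    """True if target_pos lies within the sonar's cone-shaped field of view."""
    to_target = tuple(t - s for t, s in zip(target_pos, sensor_pos))
    dist = math.sqrt(sum(c * c for c in to_target))
    if dist == 0 or dist > max_range:
        return False
    # Angle between the sonar's facing direction and the target direction.
    dir_len = math.sqrt(sum(c * c for c in sensor_dir))
    cos_angle = sum(a * b for a, b in zip(to_target, sensor_dir)) / (dist * dir_len)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= half_angle_deg
```

A check like this runs per frame per sonar, which is why a cheap geometric test (or Unity's built-in collision system) scales better than rendering and processing camera images for many bots.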
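The sides/teams/bots organization with per-bot roles (leader, follower, scout, medic, casualty) could be modeled with a few small data types. A sketch in Python; the class and method names are mine, chosen only for illustration:

```python
# Hypothetical model of the side -> team -> bot hierarchy from the post.
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    LEADER = "leader"
    FOLLOWER = "follower"
    SCOUT = "scout"
    MEDIC = "medic"
    CASUALTY = "casualty"

@dataclass
class Bot:
    name: str
    role: Role

@dataclass
class Team:
    bots: list = field(default_factory=list)

    def leader(self):
        """The bot who decides strategy/formation for this team, if any."""
        return next((b for b in self.bots if b.role is Role.LEADER), None)

@dataclass
class Side:
    teams: list = field(default_factory=list)
```

Keeping roles as data rather than hard-coded behavior fits the post's goal of letting an AI pick from a "set of options" instead of baking in tactical biases.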
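The post says leaders pick a formation based on the opposing team's position and strength but doesn't describe the decision logic, so the thresholds and formation names below are invented purely to show the shape such a function might take:

```python
# Made-up heuristic (NOT from the post) illustrating a leader's
# formation decision from relative strength and enemy distance.
def choose_formation(own_strength, enemy_strength, enemy_distance):
    if enemy_strength > own_strength:
        return "circle"   # outnumbered: defensive ring
    if enemy_distance < 20.0:
        return "wedge"    # close and strong enough: press the attack
    return "line"         # otherwise advance cautiously
```

In the actual project this choice was meant to eventually come from a learned policy rather than hand-written rules like these.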
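One formation particular mentioned in the post, that a circle formation differs depending on how many bots are on the team, reduces to evenly spacing slot positions on a circle. A small sketch; the radius and ground-plane (x, z) convention are my own assumptions:

```python
# Hypothetical slot assignment for a circle formation: n_bots evenly
# spaced slots around a center point on the ground plane.
import math

def circle_formation_slots(center, radius, n_bots):
    """Return (x, z) ground positions evenly spaced on a circle."""
    cx, cz = center
    slots = []
    for i in range(n_bots):
        theta = 2 * math.pi * i / n_bots
        slots.append((cx + radius * math.cos(theta),
                      cz + radius * math.sin(theta)))
    return slots
```

Each bot would then path-find to its assigned slot and hold it; a wedge or line formation is the same idea with a different slot-layout function, and pivoting the whole formation means recomputing every slot around the new heading.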