User:Wdb


Tips:

There are answers to common questions about TORCS: [1]

The PDF version of the TORCS tutorial (C++) has bugs (wrong images, missing chapters), so use the online version as the reference.

It is very easy to create tracks for TORCS. We can do it in two ways: with surroundings or with the track only. This could help while testing the vision part, since it lets us produce the situations we want to analyse on short tracks.

To create objects along the track, I used AC3D. It is intuitive to use and much faster to learn than Blender.

TORCS uses only one CPU; on a multi-processor board TORCS constantly saturates one core (about 50% of the total on a dual-processor machine), because it tries to render as many fps as possible.

How to drive a TORCS car with a robot

After initializing all data, TORCS calls the drive function of the robot for each driving timestep. To drive a TORCS car, the robot has to set the following values:

  • Acceleration
  • Brake pressure
  • Clutch
  • Gear
  • Steering angle

These values are part of the race command structure holding the data to be sent to TORCS in the Drive API call.

Acceleration, brake pressure and clutch are normalized to values from 0 to 1. The corresponding acceleration is calculated from the engine data read from a cartype-specific setup file. The maximum brake pressure is initially set in this cartype-specific setup file too, but may be redefined in a driver-specific setup file that is merged with the initial settings of the cartype. The steering angle is scaled to the range -1 to +1; the scaling is done with a cartype-specific constant, the steerlock value. The gear numbers include the reverse gear (0), the neutral gear (1) and then the other gears (2 = first, ...). The total number of gears is also cartype-specific.
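
A minimal C# sketch of how such a command could be filled and clamped in our wrapper. The struct and all names here are only illustrative; the real race command structure is defined in the TORCS C++ headers:

    using System;

    // Hypothetical C# mirror of the values the robot has to set per timestep.
    public struct CarControl
    {
        public double Accel;   // 0..1, fraction of full throttle
        public double Brake;   // 0..1, fraction of the max brake pressure
        public double Clutch;  // 0..1, fraction of full clutch
        public int Gear;       // 0 = reverse, 1 = neutral, 2 = first gear, ...
        public double Steer;   // -1..+1, steering angle divided by the steerlock value
    }

    public static class ControlHelper
    {
        static double Clamp(double v, double lo, double hi)
        {
            return v < lo ? lo : (v > hi ? hi : v);
        }

        // Build the command for one driving timestep from physical values.
        public static CarControl Make(double accel, double brake, double clutch,
                                      int gear, double steerAngleRad, double steerLockRad)
        {
            CarControl c;
            c.Accel  = Clamp(accel,  0.0, 1.0);
            c.Brake  = Clamp(brake,  0.0, 1.0);
            c.Clutch = Clamp(clutch, 0.0, 1.0);
            c.Gear   = gear;
            c.Steer  = Clamp(steerAngleRad / steerLockRad, -1.0, 1.0);
            return c;
        }
    }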

What to learn from it?

  • To be sure that we use the same values as TORCS, we have to read them from the normal TORCS setup files (XML format). The merging of cartype + driver + ... setup files is controlled by min and max values giving the allowed range. You cannot define a driver-specific value outside this range.

Because all these parameters are known, we can implement the reading in the wrapper using the handles to the already merged setup files provided to us by TORCS. We can set these values during our DriverBase.Init call in our C# objects using our TStaticData structure.

It is common practice to have additional parameters in the driver's setup file that are only used by the driver. Here we are free to define everything we need and to read it with the C# code!
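
A minimal sketch of reading such an extra driver parameter with plain System.Xml. The file name, section name and attribute name are only illustrative; the real files follow the TORCS setup XML schema (numbers stored as attnum elements inside named sections):

    using System;
    using System.Xml;

    public static class DriverSetup
    {
        // Read a private driver parameter from the driver's setup XML file.
        public static double ReadNumber(string path, string sectionName,
                                        string attName, double fallback)
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(path);
            // TORCS-style setup files keep numbers as <attnum name="..." val="..."/>
            // inside <section name="..."> elements.
            XmlNode node = doc.SelectSingleNode(
                "//section[@name='" + sectionName + "']/attnum[@name='" + attName + "']/@val");
            return node != null
                ? double.Parse(node.Value, System.Globalization.CultureInfo.InvariantCulture)
                : fallback;
        }
    }

    // Usage (file and parameter names are hypothetical):
    // double lookAhead = DriverSetup.ReadNumber("sharpy.xml", "private", "look ahead base", 20.0);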

  • To get a replay, we only have to save acceleration, brake pressure, clutch, gear and steering angle of all cars in the race.
  • These values come from our code, so we are free to decide where and when to save them (our Dispatcher handles all cars). TORCS can reproduce a race exactly, as long as no random input comes from the cars (like uninitialized variables used in some robots!). A small recording sketch follows below.
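
A minimal sketch of how the Dispatcher could log the five values per car and timestep for a later replay, reusing the CarControl sketch from above (all names hypothetical):

    using System.Collections.Generic;
    using System.IO;

    // One recorded command; together with the start conditions this is
    // enough to let TORCS reproduce the race deterministically.
    public class ReplayEntry
    {
        public int TimeStep;
        public int CarIndex;
        public double Accel, Brake, Clutch, Steer;
        public int Gear;
    }

    public class ReplayRecorder
    {
        readonly List<ReplayEntry> entries = new List<ReplayEntry>();

        public void Record(int step, int car, CarControl ctrl)
        {
            ReplayEntry e = new ReplayEntry();
            e.TimeStep = step; e.CarIndex = car;
            e.Accel = ctrl.Accel; e.Brake = ctrl.Brake; e.Clutch = ctrl.Clutch;
            e.Steer = ctrl.Steer; e.Gear = ctrl.Gear;
            entries.Add(e);
        }

        public void Save(string path)
        {
            using (StreamWriter w = new StreamWriter(path))
                foreach (ReplayEntry e in entries)
                    w.WriteLine("{0};{1};{2};{3};{4};{5};{6}",
                        e.TimeStep, e.CarIndex, e.Accel, e.Brake, e.Clutch, e.Gear, e.Steer);
        }
    }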


Basic driving function of Sharpy

The methods used for the first implementation of the SharpyCS driver are very basic, but they are a good base for driving on unknown tracks!

TORCS provides the information about the whole track to drive on during the InitTrack API call. It is split into segments of a single type (straight, left turn, right turn).

All the bots used for the endurance world championship take and evaluate this information to get a precalculated racing line. Most robots treat the track as having constant width, which is indeed true for the main track TORCS provides. But there are additional side segments with different characteristics than the main track. These sides can have different start and end widths, individual friction and so on (per segment). The best robots use this additional width under some conditions.

None of the functions used there are of interest if we are racing on a (partly) unknown track.

The method used for the basic steering of Sharpy works with the current segment and the "visible" part of the following segments. As it was intended for racing, it drives in the middle of the road (so the lateral offset is zero).

The basic steering angle is calculated from the current position to a target point some way in front of the car. It is corrected by the yaw angle of the car. If the car is at one side of the track, it drives back to the middle. If it approaches a turn, the target point moves toward the turn's inner side, so the car drives toward the inner side of the turn. How much depends on the distance used to look ahead. This "look ahead distance" is adjusted by the car's current speed. So in short, fast turns it gets closer to the edges; in long, slower ones it stays more in the middle.
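
A minimal sketch of this target-point steering, assuming we already have the car's position and yaw and a target point on the track (all names hypothetical):

    using System;

    public static class BasicSteering
    {
        // Steer toward a target point some look-ahead distance in front,
        // corrected by the car's yaw angle and normalized with the steerlock.
        public static double Steer(double carX, double carY, double carYaw,
                                   double targetX, double targetY, double steerLockRad)
        {
            double angleToTarget = Math.Atan2(targetY - carY, targetX - carX);
            double error = angleToTarget - carYaw;

            // Keep the error in -pi..+pi so the car does not "unwind".
            while (error >  Math.PI) error -= 2.0 * Math.PI;
            while (error < -Math.PI) error += 2.0 * Math.PI;

            // Normalize to -1..+1 as TORCS expects.
            return Math.Max(-1.0, Math.Min(1.0, error / steerLockRad));
        }

        // Speed-dependent look-ahead: the faster we go, the further the target point.
        public static double LookAheadDistance(double speedMS, double baseDist, double factor)
        {
            return baseDist + factor * speedMS;
        }
    }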

To make a robot not drive in the middle of the road, you have to add a lateral offset to the target point. The same mechanism is used for overtaking and collision avoidance. If an opponent is close in front, the robot has to make a decision: stay behind or overtake. If possible it will try to overtake by increasing the lateral offset toward the better side. Better means here: being on the inner side of the current or next coming turn.

On a long straight, the bot has to look far ahead, but in this case the next turn will indeed be visible. In a turn, it uses the current situation (the current segment) to make the decision. So we can assume that this approach will work even if the track is unknown and we use only the visible parts.

Assuming that we are racing, we always try to accelerate as much as possible. But how much is possible? Here too we start at the current segment: driving in a turn, the current segment's radius gives us the limit of the speed we can use (together with the segment's and tyres' friction and other parameters of the car). If we are faster, we have to brake! If not, or if we are on a straight, we have to look ahead. Again assuming that we are in a race (no cars coming from the other side), we can calculate the distance we would need to stop (speed1 = current speed, speed2 = 0). This gives us the maximum distance to look ahead. Now we look from segment to segment in front until we find a segment limiting the speed. If there is one, we compare the distance to it with the brake distance needed to slow down to that segment's possible speed. If the brake distance becomes greater, we brake. Here we assume that we can look far enough. If we are in a situation where we cannot see far enough, we can modify the method by replacing the brake distance with the visible distance (or half of it, if not in a race). Again, it would work (as a basic principle).
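
This reasoning can be written down directly. A minimal sketch assuming a simple point-mass model without aerodynamic downforce (our "no wings" assumption), with hypothetical names:

    using System;

    public static class SpeedPlanner
    {
        const double G = 9.81; // m/s^2

        // Maximum speed in a turn of given radius: mu * m * g >= m * v^2 / r
        // => v <= sqrt(mu * g * r).  mu combines track and tyre friction.
        public static double AllowedSpeed(double radius, double mu)
        {
            return Math.Sqrt(mu * G * radius);
        }

        // Distance needed to brake from speed1 down to speed2 with friction mu:
        // d = (v1^2 - v2^2) / (2 * mu * g).  speed2 = 0 gives the stopping distance.
        public static double BrakeDistance(double speed1, double speed2, double mu)
        {
            if (speed1 <= speed2) return 0.0;
            return (speed1 * speed1 - speed2 * speed2) / (2.0 * mu * G);
        }

        // Brake if the distance to the next limiting segment is already smaller
        // than the distance we need to slow down to that segment's allowed speed.
        public static bool MustBrake(double currentSpeed, double distToSegment,
                                     double segmentAllowedSpeed, double mu)
        {
            return BrakeDistance(currentSpeed, segmentAllowedSpeed, mu) >= distToSegment;
        }
    }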

With these simple methods, we always use the brake's maximum pressure when braking. The faster bots don't use this binary approach. The brake pressure they use is adjusted by the traction circle, the situation (opponents close), the lateral offset (being on the outer side) and other parameters. This is done while calculating the brake distance, so we can combine it later.

For collision avoidance, a filter is used to correct the basic brake pressure if an opponent comes too close. The reaction is based on the nearest opponent's data.
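
A minimal sketch of such a filter (hypothetical names, reusing the SpeedPlanner sketch above), raising the brake command when the nearest opponent in front gets too close:

    public static class BrakeFilter
    {
        // Correct the basic brake pressure using the nearest opponent in front.
        // gap: distance to that opponent; closingSpeed: our speed minus its speed.
        public static double Apply(double basicBrake, double gap,
                                   double closingSpeed, double mu)
        {
            if (closingSpeed <= 0.0) return basicBrake;   // not closing in
            double needed = SpeedPlanner.BrakeDistance(closingSpeed, 0.0, mu);
            double safetyMargin = 2.0;                    // m, illustrative value
            if (needed + safetyMargin < gap) return basicBrake;
            return 1.0;                                   // full brake pressure
        }
    }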

Avoiding cars alongside

Our basic Sharpy has the CalcOffset method to get the lateral offset from the middle of the road. Here we first check whether there are cars alongside. If so, we check whether there is more than one car alongside.

If it is one car, we move to the other side of the track. If there are two cars, we check whether we are in the middle between them. If so, we steer toward the middle between both opponents; if not, we go to the outer side of the track.

If there are more than two cars alongside, we only look at the two nearest to us!

Here we mixed two offsets: one is where we are, the other is where we want to steer to! ToDo: to get better results (lower lap times, less damage) we will calculate the change of offset and use methods to control the speed of this change. To get a smoother drive, we will replace the "middle of the track" line by the "main line of the track" and use the two distances to left and right instead of the track's width.
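
A minimal sketch of the CalcOffset decision described above (hypothetical names and factors; offsets are measured from the middle of the track, positive to the left):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class OffsetPlanner
    {
        // myOffset: our current lateral offset from the middle.
        // sideOffsets: lateral offsets of all opponents currently alongside us.
        // halfWidth: half of the track width.
        public static double CalcOffset(double myOffset, IList<double> sideOffsets, double halfWidth)
        {
            if (sideOffsets.Count == 0)
                return 0.0;                       // nobody alongside: back to the middle

            if (sideOffsets.Count == 1)           // one car: move to the other side
                return sideOffsets[0] > myOffset ? -0.7 * halfWidth : 0.7 * halfWidth;

            // Two or more cars: only look at the two nearest to our own offset.
            double[] two = sideOffsets.OrderBy(o => Math.Abs(o - myOffset)).Take(2).ToArray();
            double left  = Math.Max(two[0], two[1]);
            double right = Math.Min(two[0], two[1]);

            if (myOffset < left && myOffset > right)
                return 0.5 * (left + right);      // in between: aim for the middle of the gap

            // Otherwise go to the outer side of the track.
            return myOffset > left ? 0.8 * halfWidth : -0.8 * halfWidth;
        }
    }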

What to learn here?

We can see what our robot needs to get from the "motion planning": the "main line" of the "track" and the distances to the sides, but in 3D! We want this information as far as possible in front of our car.

For an opponent we have to know where it is, where it moves to and how fast, and whether it is yawing or not. At the moment we use a fixed car width and car length (the same for opponents and our own car), but later we will have to deal with different dimensions.

To be able to calculate the possible speed along our main line, we need assumptions concerning the friction (of track and tyres). Let's assume our car has no wings!


Facts

Here are some facts:

1. TORCS gets all information needed to display the world from ASCII files in the 3D format "ac". The track is generated by the helper program trackgen, which reads the track's definition from the track.xml file and writes it to a track.ac file. This file can be modified with a program like AC3D (or Blender), or by a programmed object generator we want to use to fill our world. The result is the world without the cars. It is static!

2. TORCS gets all information needed to display the cars from ASCII files in the 3D format "ac". They can be designed with AC3D or Blender. The properties are set in XML files (cartype or category, car and driver). This is static too!

3. TORCS displays the driver's view of the world and the other drivers by moving a camera through this world, placing the cars at the correct positions with adjusted orientations.

4. Our wrapper knows all we need! All we need is the position and rotation of the driver's camera(s) and the position and rotation of the opponents. This is contained in the data we get from our wrapper. The rest we can initially read from the ASCII files, as TORCS does!

To see an AC file, check out these tools. If you get a binary running on Linux, check it into our repo! http://www.inivis.com/resources.html

Proposal

Let's write a separate C# program able to read the ASCII files and move cameras in this world. Let's use our wrapper's partner on the C# side (the Dispatcher object) to send the camera and opponent positions and rotations to it. That's all we need.
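
A minimal sketch (hypothetical names, transport left open) of the pose data the Dispatcher would send to such a viewer each vision timestep:

    // Position and rotation of one object in the world, as the viewer needs it.
    public struct Pose
    {
        public double X, Y, Z;          // world position in m
        public double Roll, Pitch, Yaw; // rotation in rad
    }

    // One update per vision timestep: where the driver's camera(s) and all
    // opponents are.  The viewer does not need anything else; the static
    // world was already loaded from the .ac files.
    public class ViewerUpdate
    {
        public double SimTime;
        public Pose[] Cameras;
        public Pose[] Opponents;
    }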

More facts

TORCS does the simulation in the simuV2 or simuV3 library (dll/so). The rest is support code: reading files and calling the library (or things we can neglect, like sound and display).

Another Proposal

Instead of hacking TORCS we should write a simple program calling simuv2/v3 to get fast results. Later we can replace it by our own physics model using the same interface.

Advantages

  • We will have a clear segregation of functions. The interfaces needed are known (let's do it as TORCS does for a version 1.0).
  • We don't have to write code dealing with different operating systems; all code can be written for Mono.
  • We are free to do everything the way that is best for us and our later objectives.
  • We can use different modules in parallel (i.e. our new physics (as dll/so) and the simuVx libraries) to compare the results.
  • All modules needed later (extended motion planning, recorders, learning loops with random noise generators to simulate imperfect sensors etc.) will not have interfaces to TORCS. This means we don't have to write a single line of code that is only used temporarily.

Keith's thoughts

Setting aside graphics, TORCS is race management / UI and physics / simulation.

We can attack each problem separately. We can plug a new simulation engine into the old TORCS, or write a new TORCS calling the old simulation engine.

However, we should have Ogre / Tao up and running before we rewrite the management / UI portion. We need a C# solution for creating widgets, drawing text, etc. What format we store tracks and cars in is influenced by Ogre.

Feedback needed

To check whether I understood everything right, let's look at the system from different points of view! Please let me know if I missed or misunderstood something.

File:Dataflow.odg


Let's take TORCS's point of view first:

TORCS is a road-based car simulation (not an offroad one). Until now, TORCS uses only one road (track), which is a closed loop. All it needs is the information about the track's geometry and properties (coming from track.xml; it contains no meshes, only the segments' parameters like type, length, friction etc.). Track.ac or track.acc (holding the meshes) is only used to display the TORCS world to a gamer. A track is built from segments of variable length. In cross section, a track is built from a main segment in the middle and side segments on each side.

Example: Left barrier, left side, left border, main segment, right border, right side, right barrier.

For pitting in a race, TORCS provides only one type of pitlane, which is a side of the main track with a fixed lateral offset.

Example: Left barrier, left side, left border, main segment, wall, pitlane, door of pit

It is up to the robots to switch between the main track and the pitlane at the correct places (and back). For the physics simulation it is like normal driving on a side segment. Each segment (and side segment) can have its own parameters for the surface (friction, rolling resistance ...)

Example: Left barrier: tire wall, left side: sand, main segment: asphalt, right side: grass, right barrier: fence

Until now, all the main segments of a TORCS track have the same width. The side segments usually have individual widths, defined at the segment's start and end. The change of a side segment's width from start to end is linear.
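
A minimal sketch (hypothetical names) of a data model for this track structure, including the linear width change of a side segment from its start to its end:

    public enum SegmentType { Straight, LeftTurn, RightTurn }

    // One side strip (border, side, barrier ...) attached to a main segment.
    public class SideSegment
    {
        public double StartWidth, EndWidth; // m, defined at segment start and end
        public double Friction;             // surface parameter, like the main segment

        // Width at a relative position 0..1 along the segment (linear change).
        public double WidthAt(double t)
        {
            return StartWidth + (EndWidth - StartWidth) * t;
        }
    }

    public class TrackSegment
    {
        public SegmentType Type;
        public double Length;      // m, along the middle line
        public double Radius;      // m, only meaningful for turns
        public double Width;       // main track width, constant for the whole track
        public double Friction;
        public SideSegment[] LeftSides, RightSides; // border, side, barrier, ...
    }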

The TORCS simulation doesn't know obstacles other than cars. The collision detection of TORCS works with bounding boxes for the cars and for the wheels of a car rolling on the track. To get obstacles, we have to park cars on the track (ok, we are free to give such a car any shape we want, but for TORCS it has to be a car).


Between our code and TORCS, the dispatcher

The dispatcher splits the information received from TORCS and sends it to the relevant parts. The current position and rotation of our car and of the opponents (including "parked cars" as obstacles) is sent to the world viewer. Information about our fuel, brake temperature etc. is sent to the AI part (rules, learning) and to the driver. The driver gets information from the onboard sensors (engine rpm etc.). From the driver the dispatcher gets back the commands to send to TORCS. While learning, we can also send information to other parts, to get feedback about the quality of our single steps and to help them work independently.
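
A minimal sketch of how the Dispatcher could split one TORCS update and collect the command back from the driver. All interfaces and types here are hypothetical; CarControl and ViewerUpdate refer to the sketches above:

    // Illustrative status and sensor containers.
    public class CarStatus  { public double Fuel, Damage, BrakeTemp; }
    public class SensorData { public double EngineRpm, Speed; }

    public interface IWorldViewer { void Update(ViewerUpdate poses); }
    public interface IAiModule    { void Update(CarStatus status); }
    public interface IDriver      { CarControl Drive(CarStatus status, SensorData sensors); }

    public class Dispatcher
    {
        readonly IWorldViewer viewer;
        readonly IAiModule ai;
        readonly IDriver driver;

        public Dispatcher(IWorldViewer v, IAiModule a, IDriver d)
        {
            viewer = v; ai = a; driver = d;
        }

        // Called once per driving timestep with the data we got from TORCS.
        public CarControl Dispatch(ViewerUpdate poses, CarStatus status, SensorData sensors)
        {
            viewer.Update(poses);                  // camera and opponent poses
            ai.Update(status);                     // fuel, damage, brake temperature, ...
            return driver.Drive(status, sensors);  // command sent back to TORCS
        }
    }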

Next point of view: World viewer

Assuming that we want a vision system to do the "Object Detection" later (instead of gamers), we have to separate the world into two parts. The first part is everything the "Object Detection" should recognize and classify correctly: our track, lanes on the track, opponents (driving cars), obstacles (parked cars), traffic signs (and later crossings etc.). The second part is all the stuff building the surroundings. It is just background for the "Object Detection", making the recognition harder.

If we use our own world viewer, we are free to do it as we want, except for the track and the track's boundaries. If we don't want to use the track.ac file for version 1.0 to get the track's mesh, we have to write a new trackgen saving the data in another format, but these are details we shouldn't think about now. Instead of looking for a 3D program (Blender, AC3D, K3D, ...) to create objects manually, we should think about the object generator doing this work for us, based on parameters (building size in X, Y, Z; traffic sign type (stop, turn left only, ...); parked cars along the track from ... to ... etc.). Let's use a target file format used by the world viewer and later by our own physics module, but let's create things the way we can do it fast and transform them to the target format if needed.

To make our work applicable to real problems, the visualization should look "realistic". But what does realistic mean here? One point is the light. It should be possible to use different light (position, intensity, color etc.). If we assume that we would use HDRC cameras (High Dynamic Range) with a wide range of light intensities mapped nonlinearly to a given number of steps (8 bit, 10 bit, ...), we have to use special settings for our world viewer to display the image as such a camera would see it. Here realistic doesn't mean natural as seen by human eyes!
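
A minimal sketch of such a nonlinear mapping (here simply logarithmic) of a wide intensity range to 8-bit values, only to illustrate the idea of an HDRC-like camera response; names and the chosen curve are assumptions:

    using System;

    public static class HdrcResponse
    {
        // Map a linear light intensity (e.g. 1 .. 10^6) logarithmically to 0..255.
        public static byte Map(double intensity, double minIntensity, double maxIntensity)
        {
            intensity = Math.Max(minIntensity, Math.Min(maxIntensity, intensity));
            double t = Math.Log(intensity / minIntensity) / Math.Log(maxIntensity / minIntensity);
            return (byte)Math.Round(t * 255.0);
        }
    }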


The Object Detection's view

The Object Detection has to extract all relevant information from the images taken from the world viewer. Relevant is where we are in relation to the track, the opponents and the obstacles, and what additional information we have to know (traffic signs, crossings etc.). We also need estimations about the properties of the surface (asphalt, sand, grass etc.). To make it easier to find the objects, it has to use feedback from the next step, the Object Tracking.


View of the Object Tracking

Its job is to help the Object Detection find known objects in the next timestep faster. Here we will have the possibility to use information from our driver about the steering, acceleration and braking since the last (vision) timestep. So we can set up the detection with estimations of our translation and rotation relative to the last images. We also have to decide whether our car will get additional information from other sensors (compass, GPS, radar, parking distance sensors etc.).
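
A minimal sketch of such a prediction step, dead-reckoning our own translation and rotation since the last vision timestep from speed and yaw rate (hypothetical names, flat-world assumption; Pose refers to the sketch above):

    using System;

    public static class EgoMotion
    {
        // Predict the new pose from the last one, assuming roughly constant
        // speed and yaw rate during the (short) vision timestep dt.
        public static Pose Predict(Pose last, double speedMS, double yawRate, double dt)
        {
            double yaw = last.Yaw + yawRate * dt;
            Pose p;
            p.X = last.X + speedMS * dt * Math.Cos(yaw);
            p.Y = last.Y + speedMS * dt * Math.Sin(yaw);
            p.Z = last.Z;
            p.Roll = last.Roll;
            p.Pitch = last.Pitch;
            p.Yaw = yaw;
            return p;
        }
    }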


View of the AI modules (rules, learning)

The AI modules get all information about our position, rotation and movement relative to the world (track), the opponents and the obstacles. They will also get information from traffic signs, crossings or parking lots. Here we decide what we want the robot to do (racing, pitting, parking, ..., slipstreaming, overtaking, avoiding, ...). Later we will have the possibility to switch between possible paths to get from the current position to the target.


Motion planning's view

At this point we know everything (that we want to use). We know where we are and what our current movement is (relative to the world and other relevant things). We also know what we want the robot to do, where to go, etc. From now on we will no longer add information but reduce it to what is needed for the following steps. First we will have to pass the known information about the track and obstacles (opponents), our position, rotation and movement to the track generator. Later we can also switch here between road-based and offroad driving. This is why I split Motion Planning and Track Generator. For offroad driving, we would have to use another module.


View of the Track Generator

Here we try to estimate the parameters of the track, used later by the driver. Assuming we are on the road, we have to recover the information we got from TORCS but don't use. In our estimation (model of the near world), we will calculate a racing line to be prepared for everything that might come up. If not racing, this would mean using the right lane to be prepared to turn at the next crossing, to be able to park in the next parking lot etc. The model is kept until we get newer information from the former steps.


The driver's view

The driver has to decide in real time (50 times a second) how to steer the car (steering angle, acceleration, brake pressure, gear selection, clutch). Instead of the TORCS information it will use the model provided by the track generator. It gets additional information based on the rules, e.g. whether to try to overtake or to keep slipstreaming. It also uses the onboard sensors (engine rpm etc.).


And then we are back at our TORCS interface.

For version 1.0 I think we can use the UI from TORCS to start a race.

What I tried to achieve is that the modules can be programmed and can work relatively independently (and be replaced with another approach if wanted).

I'm not familiar with multi-threading/processing under Linux or Mono. With Windows I would be able to make a fast and robust interface usable with multi-threading or multi-processing as well. Is there one of us who knows something about it with Mono/Linux?