Difference between revisions of "User:Wdb"

From SpaceElevatorWiki.com
  
  
== Between our code and TORCS, the dispatcher ==
 
  
The dispatcher splits up the information from TORCS and sends it to the relevant parts.

The current position and rotation of our car and of the opponents (including "parked cars" as obstacles) are sent to the world viewer.

Information about our fuel, brake temperature etc. is sent to the AI part (rules, learning) and to the driver. The driver also gets information from the onboard sensors (engine rpm etc.).

From the driver, the dispatcher gets back the commands to send to TORCS.

While learning, we can also send information to other parts, to get feedback about the quality of our individual steps and to help the modules work independently.
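A rough C++ sketch of this routing (C++ because the wrapper's TORCS side is C++; every type and member name here is made up, since the real module interfaces are not fixed yet):

```cpp
#include <cassert>
#include <vector>

// Hypothetical snapshot of what the wrapper hands us each simulation step.
struct CarPose   { double x, y, z, yaw, pitch, roll; };
struct CarStatus { double fuel, brakeTemp, rpm; };

struct TorcsFrame {
    CarPose              ownPose;
    std::vector<CarPose> opponentPoses;  // includes "parked cars" as obstacles
    CarStatus            status;         // fuel, brake temperature, sensors
};

// Sinks the dispatcher feeds; placeholders for the real modules.
struct WorldViewer { std::vector<CarPose> poses; };
struct AiModule    { double fuel = 0, brakeTemp = 0; };
struct Driver      { double rpm = 0, steer = 0, accel = 0; };

struct DriveCmd { double steer, accel; };

// Split one TORCS frame, route the pieces to the relevant parts,
// then collect the driver's commands to send back to TORCS.
DriveCmd dispatch(const TorcsFrame& f, WorldViewer& wv, AiModule& ai, Driver& drv) {
    wv.poses.clear();
    wv.poses.push_back(f.ownPose);             // our car -> world viewer
    for (const CarPose& p : f.opponentPoses)   // opponents/obstacles -> world viewer
        wv.poses.push_back(p);
    ai.fuel      = f.status.fuel;              // fuel, brake temperature -> AI part
    ai.brakeTemp = f.status.brakeTemp;
    drv.rpm      = f.status.rpm;               // onboard sensors -> driver
    return { drv.steer, drv.accel };           // commands back to TORCS
}
```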
 
  
 

Revision as of 14:34, 5 July 2008

Additional pages:

Let's collect ideas of what to implement in addition to the features of the current TORCS

How to drive a TORCS car from a robot

Overview for the modules of the new system

Tips:

There are answers to common questions about TORCS: [1]

The PDF version of the TORCS tutorial (C++) has bugs (wrong images, missing chapters), so use the online version as the reference.

It is very easy to create tracks for TORCS. We can do it in two ways: with surroundings, or track only. This could help while testing the vision part, since we can produce the situations we want to analyse on short tracks.

To create objects along the track, I used AC3D. It is intuitive to use and much faster to learn than Blender.

TORCS uses only one CPU, so on a multi-processor board TORCS always takes 50% (to get as many fps as possible).

== Facts ==

Here are some facts:

1. TORCS gets all the information needed to display the world from ASCII files in the 3D format "ac". The track is generated by the helper program trackgen, which reads the track's definition from the track.xml file and writes it to a track.ac file. This file can be modified with a program like AC3D (or Blender), or with a programmed object generator we want to use to fill our world. The result is the world without the cars. It is static!
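Because "ac" is a plain ASCII format, even a toy reader is short. A hedged sketch: this only sums the `numvert` counts to count the vertices; a real loader would also parse materials, surfaces and the object tree ("kids"):

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Count the vertices in an AC3D (".ac") stream by summing the "numvert"
// counts. Deliberately minimal -- just to show how readable the format is.
int countVertices(std::istream& in) {
    std::string tok;
    int total = 0;
    while (in >> tok)
        if (tok == "numvert") {
            int n = 0;
            in >> n;
            total += n;
        }
    return total;
}
```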

2. TORCS gets all the information needed to display the cars from ASCII files in the 3D format "ac". They can be designed with AC3D or Blender. The properties are set in XML files (car type or category, car and driver). This is static too!

3. TORCS displays the driver's view of the world and the other drivers by moving a camera through this world, placing the cars at the correct positions with the right orientations.

4. Our wrapper knows all we need! All we need is the position and rotation of the driver's camera(s) and the position and rotation of the opponents. This is the content of the data we get from our wrapper. The rest we can read once from the ASCII files, as TORCS does!
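So per display frame only a handful of numbers have to travel from the wrapper to the viewer. A sketch of how small that payload is (the `Pose`/`packFrame` names are made up for illustration):

```cpp
#include <cassert>
#include <vector>

// A pose is six numbers: position plus rotation. One display frame is
// then just the driver's camera pose followed by one pose per opponent;
// the static world and the car meshes come once from the .ac files.
struct Pose { double x, y, z, yaw, pitch, roll; };

std::vector<double> packFrame(const Pose& camera, const std::vector<Pose>& opponents) {
    std::vector<double> buf;
    auto put = [&buf](const Pose& p) {
        for (double v : { p.x, p.y, p.z, p.yaw, p.pitch, p.roll })
            buf.push_back(v);
    };
    put(camera);                               // driver's camera first
    for (const Pose& p : opponents) put(p);    // then every opponent
    return buf;
}
```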

To view an AC file, check out these tools. If you get a binary running on Linux, check it into our repo! http://www.inivis.com/resources.html

== Proposal ==

Let's write a separate C# program able to read the ASCII files and move cameras in this world. Let's use our wrapper's partner on the C# side (the Dispatcher object) to send the cameras' and opponents' positions and rotations to it. That's all we need.

== More facts ==

TORCS does the simulation in the simuV2 or simuV3 library (dll/so). The rest is scaffolding: reading files and calling the library (or things we can neglect, like sound and display).

== Another Proposal ==

Instead of hacking TORCS, we should write a simple program calling simuV2/V3 to get fast results. Later we can replace it with our own physics model using the same interface.

== Advantages ==

* We will have a clear segregation of functions; the interfaces needed are known (let's do it as TORCS does for version 1.0).
* We don't have to write code dealing with different operating systems; all code can be written for Mono.
* We are free to do everything the way that is best for us and our later objectives.
* We can use different modules in parallel (i.e. our new physics (as dll/so) and the simuVx libraries) to compare the results.
* All modules needed later (extended motion planning, recorders, learning loops with random noise generators to simulate imperfect sensors, etc.) will not have interfaces to TORCS. This means we don't have to write a single line of temporarily used code.

== Keith's thoughts ==

Setting aside graphics, TORCS is race management / UI and physics / simulation.

We can attack each problem separately. We can plug a new simulation engine into the old TORCS, or write a new TORCS calling the old simulation engine.

However, we should have Ogre / Tao up and running before we rewrite the management / UI portion. We need a C# solution for creating widgets, drawing text, etc. The format we store tracks and cars in is influenced by Ogre.




== Next point of view: World viewer ==

Assuming that we later want a vision system to do the "Object Detection" (instead of gamers), we have to separate the world into two parts. The first part is everything the "Object Detection" should recognize and classify correctly: our track, lanes on the track, opponents (driving cars), obstacles (parked cars), traffic signs (and later crossings etc.). The second part is all the stuff building the surroundings. It is just background for the "Object Detection", making the recognition harder.

If we use our own world viewer, we are free to do everything as we want, except for the track and the track's boundaries. If we don't want to use the track.ac file for version 1.0 to get the track's mesh, we have to write a new trackgen saving the data in another format, but these are details we shouldn't think about now. Instead of looking for a 3D program (Blender, AC3D, K3D, ...) to create objects manually, we should think about the Object Generator, which does this work for us based on parameters (building size in X, Y, Z; traffic sign type (Stop, Turn left only, ...); parked cars along the track from/to; etc.). Let's use a target file format used by the world viewer and later by our own physics module, but let's create things however we can do it fast, and transform them to the target format if needed.
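As a sketch of one such Object Generator primitive, "parked cars along the track from A to B" could expand into a list of track positions (function name and spacing parameter are made up):

```cpp
#include <cassert>
#include <vector>

// Expand "parked cars along the track from 'from' to 'to'" into concrete
// track positions, one car every 'spacing' metres of track length.
std::vector<double> parkedCarPositions(double from, double to, double spacing) {
    std::vector<double> positions;
    for (double s = from; s <= to + 1e-9; s += spacing)  // epsilon: include 'to'
        positions.push_back(s);
    return positions;
}
```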

To make our work applicable to real problems, the visualization should look "realistic". But what does realistic mean here? One point is the light. It should be possible to use different light (position, intensity, color etc.). If we assume we would use HDRC cameras (High Dynamic Range) with a wide range of light intensities mapped nonlinearly to a given number of steps (8 bit, 10 bit, ...), we have to use special settings for our world viewer to display the image as such a camera would see it. Here realistic doesn't mean natural as seen by human eyes!
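Such a nonlinear mapping could be sketched like this, assuming a logarithmic response curve (the real camera's curve would have to come from its data sheet):

```cpp
#include <cassert>
#include <cmath>

// Compress a wide physical intensity range [lo, hi] onto the 8-bit pixel
// range 0..255. The logarithmic curve is an assumption standing in for
// the real HDRC response; intensities outside the range are clamped.
int toPixel(double intensity, double lo, double hi) {
    if (intensity <= lo) return 0;
    if (intensity >= hi) return 255;
    return static_cast<int>(255.0 * std::log(intensity / lo) / std::log(hi / lo));
}
```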

Another point is the size of the visible part of the world. To be perfect, we should have a 360-degree round view or several cameras (to be able to drive fairly while being lapped, to know when to come back to the racing line while overtaking/lapping, etc.).
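With several cameras, the round view amounts to spacing their yaw angles evenly; how many cameras are enough is left open here (names are illustrative):

```cpp
#include <cassert>
#include <vector>

// Yaw angles (degrees) for k cameras covering the full 360-degree view.
std::vector<double> cameraYaws(int k) {
    std::vector<double> yaws;
    for (int i = 0; i < k; ++i)
        yaws.push_back(i * 360.0 / k);   // even spacing around the car
    return yaws;
}
```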

== View of the AI modules (rules, learning) ==

The AI modules get all information about our position, rotation and movement relative to the world (track), to opponents and to obstacles. They will also get information from traffic signs, crossings or parking lots. Here we decide what we want the robot to do (racing, pitting, parking, ..., slipstreaming, overtaking, avoiding, ...). Later we will have the possibility to switch between possible routes from the current position to the target.

== Motion Planning's view ==

At this point we know everything (that we want to use). We know where we are and what our current movement is (relative to the world and other relevant things). We also know what we want the robot to do, where to go, etc. From now on we no longer add information, but reduce it to what is needed for the following steps. First we have to pass the known information about the track and obstacles (opponents), and our position, rotation and movement, to the track generator. Later we can also switch here between road-based and offroad driving. This is why I split Motion Planning and the Track Generator: for offroad driving, we would have to use another module.

== View of the Track Generator ==

Here we try to estimate the parameters of the track, used later by the driver. Assuming we are on the road, we have to recover the information we get from TORCS but don't use. In our estimation (a model of the near world), we calculate a racing line to be prepared for everything that might come up. If not racing, that would mean using the right lane, to be prepared to turn at the next crossing, to be able to park in the next parking lot, etc. The model is kept until we get newer information from the former steps.
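"The model is kept until we get newer information" can be sketched as a stamped model that is only replaced by a fresher estimate (all names here are illustrative, not a fixed interface):

```cpp
#include <cassert>

// An estimated near-world model, stamped with the simulation step it is
// based on. 'racingLineOffset' stands in for the real track parameters.
struct TrackModel {
    long   frame = -1;
    double racingLineOffset = 0.0;
};

// Keep the current model until a newer estimate arrives.
struct ModelCache {
    TrackModel current;
    void update(const TrackModel& m) {
        if (m.frame > current.frame)   // ignore stale or duplicate estimates
            current = m;
    }
};
```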

== The driver's view ==

The driver has to decide in realtime (50 times a second) how to steer the car (steering angle, acceleration, brake pressure, gear selection, clutch). Instead of the TORCS information, it uses the model provided by the track generator. It gets additional information based on the rules, e.g. whether to try to overtake or to keep slipstreaming. It also uses the onboard sensors (engine rpm etc.).
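The driver's output per step could look like this; at 50 Hz each decision covers 20 ms. The brake-or-accelerate rule below is a deliberate placeholder, not the real driver logic:

```cpp
#include <cassert>

// What the driver must produce every control step.
struct DriveCommand {
    double steerAngle = 0, accel = 0, brakePressure = 0, clutch = 0;
    int    gear = 0;
};

constexpr double controlPeriodMs = 1000.0 / 50.0;  // 20 ms per decision at 50 Hz

// Toy decision rule: brake proportionally above the model's speed limit,
// otherwise full throttle. A real driver would use the whole track model.
DriveCommand decide(double speed, double modelSpeedLimit) {
    DriveCommand cmd;
    if (speed > modelSpeedLimit)
        cmd.brakePressure = (speed - modelSpeedLimit) / modelSpeedLimit;
    else
        cmd.accel = 1.0;
    return cmd;
}
```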

And then we are back at our TORCS interface.

For version 1.0 I think we can use the UI from TORCS to start a race.

What I tried to achieve is that the modules can be programmed and can work relatively independently (and be replaced with another approach if wanted).