User:Wdb

From SpaceElevatorWiki.com
Jump to navigationJump to search
 
(17 intermediate revisions by 2 users not shown)
Line 1: Line 1:
== Issues List ==
<issues project = "Wdb" search = "false" filter = "false"/>
== ''Additional pages:'' ==


[[WDB_Collecting_Ideas|Let's collect ideas of what to implement in addition to the features of the current TORCS]]

[[TORCS_robot_driving|How to drive a TORCS car from a robot]]

[[Overview_of_modules|Motion planning]]

[[Pitting|Pitting]]


== ''Tips:'' ==


There are answers to common questions about TORCS: [1]

In the PDF version of the TORCS tutorial (C++) there are bugs (wrong images, missing chapters), so use the online version as the reference.

It is very easy to create tracks for TORCS. We can do it in two ways: with surroundings, or track only. This could help while testing the vision part, since we can produce the situations we want to analyse on short tracks.

To create objects along the track, I used AC3D. It is intuitive to use and much faster to learn than Blender.

TORCS uses only one CPU; with a multi-processor board we always have 50% taken by TORCS (to get as many fps as possible).

The screen resolution used by TORCS can be selected in the "Options - Display" menu. If the wanted values are not in the list, you can add them in the configuration file TORCS\config\screen.xml (Screen Properties x, y, and/or window width, window height).
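To illustrate, an added resolution entry could look roughly like this (the element and attribute names below are my assumption based on the usual TORCS parameter-file layout; compare with your local screen.xml before editing):

<syntaxhighlight lang="xml">
<!-- Sketch only: verify the exact names against your own TORCS\config\screen.xml -->
<section name="Screen Properties">
  <attnum name="x" val="1280"/>             <!-- fullscreen width -->
  <attnum name="y" val="1024"/>             <!-- fullscreen height -->
  <attnum name="window width" val="1280"/>  <!-- size when running windowed -->
  <attnum name="window height" val="1024"/>
</section>
</syntaxhighlight>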


== '''Facts''' ==
Here are some facts:

1. TORCS gets all information needed to display the world from ASCII files in the 3D format "ac". The track is generated by the helper program trackgen, which reads the track's definition from the track.xml file and writes it to a track.ac file. This file can be modified with a program like AC3D (or Blender), or by a programmed object generator we want to use to fill our world. The result is the world without the cars. It is static!

2. TORCS gets all information to display cars from ASCII files in the 3D format "ac". They can be designed with AC3D or Blender. The properties are set in XML files (car type or category, car and driver). They are static too!

3. TORCS displays the driver's view of the world and the other drivers by moving a camera through this world, placing the cars at the correct positions in the adjusted directions.

4. Our wrapper knows all we need! All we need is the position and rotation of the driver's camera(s) and the position and rotation of the opponents. This is the content of the data we get from our wrapper. The rest we can initially read from the ASCII files, as TORCS does!

To see an AC file, check out these tools. If you get a binary running on Linux, check it into our repo! http://www.inivis.com/resources.html
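As a minimal sketch of the per-frame data from fact 4 (all type and member names here are invented for illustration, not an existing TORCS or wrapper API):

<syntaxhighlight lang="csharp">
// Sketch only: invented types, not an existing TORCS or wrapper API.
using System.Numerics;

// One pose per frame for the driver's camera and for each opponent (fact 4).
public readonly struct Pose
{
    public Vector3 Position { get; }     // world coordinates, metres
    public Quaternion Rotation { get; }  // orientation in the world

    public Pose(Vector3 position, Quaternion rotation)
    {
        Position = position;
        Rotation = rotation;
    }
}

// The per-frame packet the wrapper delivers: camera pose plus opponent poses.
public sealed record FrameUpdate(Pose Camera, Pose[] Opponents);
</syntaxhighlight>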
== '''Proposal''' ==
Let's write a separate C# program able to read the ASCII files and move cameras in this world. Let's use our wrapper's partner on the C# side (the Dispatcher object) to send the cameras' and opponents' positions and rotations to it. That's all we need.
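A rough sketch of how such a viewer could consume those updates, reusing the Pose and FrameUpdate types from the sketch above (IScene and its methods are invented placeholders; the real calls would come from Ogre / Tao bindings):

<syntaxhighlight lang="csharp">
// Sketch only: IScene is an invented placeholder for Ogre / Tao scene calls.
public interface IScene
{
    void LoadWorld(string acFilePath);       // static world read from track.ac
    void SetCamera(Pose pose);               // the driver's view
    void SetOpponent(int index, Pose pose);  // move each opponent model
    void Render();
}

public sealed class WorldViewer
{
    private readonly IScene scene;

    public WorldViewer(IScene scene, string trackAcFile)
    {
        this.scene = scene;
        scene.LoadWorld(trackAcFile);  // read once; the world is static (fact 1)
    }

    // Called by the Dispatcher whenever a new frame arrives from the wrapper.
    public void OnFrame(FrameUpdate frame)
    {
        scene.SetCamera(frame.Camera);
        for (int i = 0; i < frame.Opponents.Length; i++)
            scene.SetOpponent(i, frame.Opponents[i]);
        scene.Render();
    }
}
</syntaxhighlight>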
== '''More facts''' ==
TORCS does the simulation in the simuV2 or simuV3 library (dll/so). The rest is support: reading files and calling the library (or unneeded things like sound and display, which we can neglect).

== '''Another Proposal''' ==
Instead of hacking TORCS we should write a simple program calling simuv2/v3 to get fast results. Later we can replace it with our own physics model using the same interface.
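A minimal sketch of what calling the native simulation library from C# could look like (the exported names and the flat "simwrap" wrapper are assumptions; the real simuv2/v3 entry points work on TORCS-internal structs, so a small C wrapper would probably be needed):

<syntaxhighlight lang="csharp">
using System.Runtime.InteropServices;

// Sketch only: assumes a small C wrapper ("simwrap") exporting flat functions
// around simuv2/v3, since the real entry points use TORCS-internal structs.
public static class SimuV2
{
    [DllImport("simwrap")]  // simwrap.dll on Windows, libsimwrap.so on Linux
    public static extern int SimWrapInit(string trackXmlPath, int carCount);

    [DllImport("simwrap")]  // advance the physics by dt seconds
    public static extern void SimWrapUpdate(double dt);

    [DllImport("simwrap")]
    public static extern void SimWrapShutdown();
}

public static class Program
{
    public static void Main()
    {
        SimuV2.SimWrapInit("tracks/road/g-track-1/g-track-1.xml", 2);
        for (int step = 0; step < 500; step++)
            SimuV2.SimWrapUpdate(0.02);  // 50 Hz, matching the driver's rate
        SimuV2.SimWrapShutdown();
    }
}
</syntaxhighlight>

Keeping this interface stable would later let us run our own physics module and the simuVx libraries in parallel and compare the results, as described under Advantages.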
== '''Advantages''' ==
We will have a clear segregation of functions. The interfaces needed are known (let's do it as TORCS does for version 1.0). We don't have to write code dealing with different operating systems; all code can be written for Mono. We are free to do everything as is best for us and our later objectives. We can use different modules in parallel (i.e. our new physics (as dll/so) and the simuVx libraries) to compare the results. All later-needed modules (extended motion planning, recorders, learning loops with random noise generators to simulate imperfect sensors, etc.) will not have interfaces to TORCS. This means we don't have to write a single line of code that is only used temporarily.

== '''Keith's thoughts''' ==
Setting aside graphics, TORCS is race management / UI and physics / simulation.

We can attack each problem separately. We can plug in a new simulation engine called from the old TORCS, or write a new TORCS calling the old simulation engine.

However, we should have Ogre / Tao up and running before we re-write the management / UI portion. We need a C# solution for creating widgets and drawing text, etc. What format we store tracks and cars in is influenced by Ogre.
== '''Feedback needed''' ==
To check whether I understood everything right, let's look at the system from different points of view!
Please let me know if I missed or misunderstood something.
[[Image:Dataflow.odg]]
== Let's take TORCS' point of view first ==
TORCS is a road-based car simulation (not an off-road one).
So far, TORCS uses only one road (track), which is a closed loop.
All it needs is the information about the track's geometry and properties (coming from track.xml; it contains no meshes, but the segments' parameters like type, length, friction, etc.).
Track.ac or track.acc (holding the meshes) is only used to display the TORCS world to a gamer.
A track is built from segments of variable length.
In cross-section, a track is built from a main segment in the middle and side segments on each side.
Example:
Left barrier, left side, left border, main segment, right border, right side, right barrier.
''(What is the difference between border and barrier?) Remove this comment when answered''
For pitting in a race, TORCS provides only one type of pit lane, which is a side segment of the main track with a fixed lateral offset.
Example:
Left barrier, left side, left border, main segment, wall, pit lane, door of pit.
It is up to the robots to switch between the main track and the pit lane at the correct places (and back). For the physics simulation it is like normal driving on a side segment.
Each segment (and side segment) can have its own parameters for the surface (friction, rolling resistance, ...).
Example:
Left barrier: tire wall, left side: sand, main segment: asphalt, right side: grass, right barrier: fence.
So far, all the main segments of a TORCS track have the same width.
The side segments usually have individual widths, defined at the segment's start and end.
The side segment's width changes linearly from start to end.
The TORCS simulation doesn't know any obstacles except other cars.
The collision detection in TORCS works with bounding boxes for the cars and for the wheels of a car rolling on the track.
To get obstacles, we have to park cars on the track (we are free to give them any shape we want, but for TORCS they have to be cars).
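As a hedged illustration of the segment description above (all type and property names are invented, not TORCS' real track structures):

<syntaxhighlight lang="csharp">
// Sketch only: illustrative model, not TORCS' real track structures.
public enum SegmentType { Straight, LeftCurve, RightCurve }

public sealed class SideSegment
{
    public double StartWidth;   // width at the segment's start, metres
    public double EndWidth;     // width at the segment's end, metres
    public double Friction;     // surface parameter, e.g. sand vs. grass

    // Side widths change linearly from start to end.
    public double WidthAt(double fraction) =>
        StartWidth + (EndWidth - StartWidth) * fraction;  // fraction in [0, 1]
}

public sealed class TrackSegment
{
    public SegmentType Type;
    public double Length;       // metres along the track
    public double Friction;     // main segment surface
    // Cross-section, inside out: border, side, barrier on each side.
    public SideSegment LeftSide, RightSide;
}
</syntaxhighlight>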
== Between our code and TORCS: the dispatcher ==
The dispatcher splits the information from TORCS and sends it to the relevant parts.
The current position and rotation of our car and of the opponents (including "parked cars" as obstacles) are sent to the world viewer.
Information about our fuel, brake temperature, etc. is sent to the AI part (rules, learning) and to the driver. The driver gets information from the onboard sensors (engine RPM etc.).
From the driver the dispatcher gets back the commands to send to TORCS.
While learning, we can also send information to other parts, to get feedback about the quality of our individual steps and to help the parts work independently.
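A sketch of that routing (all interfaces are invented for illustration and reuse the Pose/FrameUpdate types from the earlier sketch; the real Dispatcher will depend on the wrapper's wire format):

<syntaxhighlight lang="csharp">
// Sketch only: invented interfaces showing the routing described above.
// Pose and FrameUpdate are the types from the earlier sketch.
public sealed record CarState(double Fuel, double BrakeTemp, double EngineRpm);
public sealed record DriverCommands(
    double Steer, double Throttle, double Brake, int Gear, double Clutch);

public interface IWorldViewer { void OnFrame(FrameUpdate frame); }
public interface IAiPart { void OnCarState(CarState state); }
public interface IDriver
{
    void OnCarState(CarState state);  // onboard sensors: fuel, temps, RPM, ...
    DriverCommands GetCommands();     // steering, throttle, brake, gear, clutch
}

public sealed class Dispatcher
{
    private readonly IWorldViewer viewer;
    private readonly IAiPart ai;
    private readonly IDriver driver;

    public Dispatcher(IWorldViewer viewer, IAiPart ai, IDriver driver)
    {
        this.viewer = viewer;
        this.ai = ai;
        this.driver = driver;
    }

    // Called once per simulation step with everything TORCS reported.
    public DriverCommands Step(FrameUpdate frame, CarState state)
    {
        viewer.OnFrame(frame);        // poses of our car, opponents, obstacles
        ai.OnCarState(state);         // fuel, temperatures, ... for rules/learning
        driver.OnCarState(state);     // sensor data for the driver
        return driver.GetCommands();  // commands sent back to TORCS
    }
}
</syntaxhighlight>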
== Next point of view: World viewer ==
Assuming that we want a vision system to do the "Object Detection" later (instead of gamers), we have to separate the world into two parts.
The first part is everything the "Object Detection" should recognize and classify correctly.
This is our track, lanes on the track, opponents (driving cars), obstacles (parked cars), traffic signs (and later crossings etc.).
The second part is all the stuff that builds the surroundings.
It is just background for the "Object Detection", making the recognition harder.
If we use our own world viewer, we are free to do it as we want, except for the track and the track's boundaries. If we don't want to use the track.ac file in version 1.0 to get the track's mesh, we have to write a new trackgen that saves the data in another format, but these are details we shouldn't think about now.
Instead of looking for a 3D program (Blender, AC3D, K-3D, ...) to create objects manually, we should think about the Object Generator doing this work for us, based on parameters (building size in X, Y, Z; traffic sign type (stop, turn left only, ...); parked cars along the track from ... to ..., etc.). Let's use a target file format used by the world viewer and later by our own physics module, but let's create things however we can do it fast, and transform them to the target format if needed.
To make our work usable for real problems, the visualization should look "realistic". But what does realistic mean here?
One point is the light. It should be possible to use different lighting (position, intensity, color, etc.).
If we assume we would use HDRC cameras (High Dynamic Range) with a wide range of light intensities mapped nonlinearly to a given number of steps (8 bit, 10 bit, ...), we have to use special settings for our world viewer to display the image as such a camera would see it. Here realistic doesn't mean natural as seen by human eyes!
Another point is the size of the visible part of the world. To be perfect, we should have a 360-degree view or several cameras (to be able to drive fairly while being lapped, to know when to come back to the racing line while overtaking/lapping, etc.).
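As an illustration of such a nonlinear mapping, here a simple logarithmic compression to 8 bit is assumed (a real HDRC response curve would be measured, not invented):

<syntaxhighlight lang="csharp">
using System;

// Sketch only: compress a high-dynamic-range intensity into 8-bit steps
// with a logarithmic curve, as a stand-in for a real HDRC response.
public static class HdrMapping
{
    // intensity: linear scene luminance; maxIntensity: largest value expected.
    public static byte ToByte(double intensity, double maxIntensity)
    {
        double normalized = Math.Log(1.0 + intensity) / Math.Log(1.0 + maxIntensity);
        return (byte)Math.Round(normalized * 255.0);  // 0..255 = 8-bit steps
    }
}
</syntaxhighlight>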
== View of the AI modules (rules, learning) ==
The AI modules get all information about our position, rotation and movement relative to the world (track), the opponents and the obstacles.
They will also get information about traffic signs, crossings, or parking lots.
Here we decide what we want the robot to do (racing, pitting, parking, ..., slipstreaming, overtaking, avoiding, ...).
Later we will have the possibility to switch between possible routes to get from the current position to the target.
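A minimal sketch of how those decisions could be represented (the names are placeholders; CarState and FrameUpdate come from the earlier sketches):

<syntaxhighlight lang="csharp">
// Sketch only: placeholder names for the decisions listed above.
public enum Behavior { Race, Pit, Park, Slipstream, Overtake, Avoid }

public interface IAiRules
{
    // Pick the robot's current behavior from the latest world knowledge.
    Behavior Decide(CarState state, FrameUpdate frame);
}
</syntaxhighlight>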
== Motion Planning's view ==
At this point we know everything (that we want to use).
We know where we are and what our current movement is (relative to the world and the other relevant things).
We also know what we want the robot to do, where to go, etc.
From now on we will no longer add information, but reduce it to what is needed for the following steps.
First we have to hand the known information about the track and obstacles (opponents), and our position, rotation and movement, to the track generator.
Later we can also switch here between road-based and off-road driving.
This is why I split motion planning and the track generator. For off-road driving, we would have to use another module.
== View of the Track Generator ==
Here we try to estimate the parameters of the track that are later used by the driver.
Assuming we are on the road, we have to recover the information that we get from TORCS but don't use directly.
In our estimate (a model of the near world) we calculate a racing line, to be prepared for whatever might come up.
If not racing, that would mean using the right lane, to be prepared to turn at the next crossing, to be able to park in the next parking lot, etc.
The model is kept until we get newer information from the earlier steps.
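A hedged sketch of the track generator's role (the interfaces are invented; TrackSegment and Behavior come from the sketches above):

<syntaxhighlight lang="csharp">
using System;

// Sketch only: the track generator keeps a model of the near world and
// refreshes it whenever newer information arrives from the earlier steps.
public sealed class TrackModel
{
    public TrackSegment[] NearSegments = Array.Empty<TrackSegment>();
    public double[] RacingLineOffsets = Array.Empty<double>();  // lateral offset per segment
}

public sealed class TrackGenerator
{
    private TrackModel model = new TrackModel();

    public TrackModel Current => model;  // the driver reads this model

    // Rebuild the local model when newer information arrives.
    public void Update(TrackSegment[] nearSegments, Behavior behavior)
    {
        var offsets = new double[nearSegments.Length];
        for (int i = 0; i < nearSegments.Length; i++)
            offsets[i] = behavior == Behavior.Race
                ? 0.0   // placeholder: a real racing-line computation goes here
                : 2.0;  // placeholder: keep right, e.g. for the next crossing
        model = new TrackModel { NearSegments = nearSegments, RacingLineOffsets = offsets };
    }
}
</syntaxhighlight>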
== The driver's view ==
The driver has to decide in real time (50 times a second) how to steer the car (steering angle, acceleration, brake pressure, gear selection, clutch).
Instead of the TORCS information, it uses the model provided by the track generator.
It gets additional information based on the rules, e.g. whether to try to overtake or to keep slipstreaming. It also uses the onboard sensors (engine RPM etc.).
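A minimal sketch of one driver step at 50 Hz (the logic is a placeholder; TrackModel, Behavior and DriverCommands come from the sketches above):

<syntaxhighlight lang="csharp">
// Sketch only: a placeholder driver producing the five controls at 50 Hz.
public sealed class SimpleDriver
{
    // Called every 20 ms with the track generator's current model.
    public DriverCommands Step(TrackModel model, Behavior behavior)
    {
        // Placeholder steering: head towards the racing line of the first segment.
        double steer = model.RacingLineOffsets.Length > 0
            ? -0.1 * model.RacingLineOffsets[0]
            : 0.0;
        double throttle = behavior == Behavior.Race ? 1.0 : 0.5;
        return new DriverCommands(steer, throttle, 0.0, 3, 0.0);  // gear 3, no clutch
    }
}
</syntaxhighlight>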
== And then we are back to our TORCS interface ==
For version 1.0 I think we can use the UI from TORCS to start a race.
What I tried to achieve is that the modules can be programmed and can work relatively independently (and be replaced with another approach if wanted).
