Goals

Revision as of 00:45, 10 June 2008

Vision:
* Get Intel code compiling on Ubuntu and into a Mercurial tree. (20 hours)
* Hook up a .NET wrapper to the Intel vision code. (20 hours; a rough sketch follows this list)
* Create or find a two-frame GUI tool that takes a picture, applies the Intel code to it, and displays an output image of its analysis. (40 hours)
* Hook the input of the vision recognition to the output of the 3-D engine! (40 hours)
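
The .NET wrapper is essentially a P/Invoke binding around the native Intel vision code. The C# sketch below shows only the general shape, assuming the Intel code is built as a native library: the library name "intelvision", the exported function AnalyzeFrame, and its signature are placeholders for illustration, not the actual Intel API, so the real binding has to match whatever the C headers expose.

<source lang="csharp">
using System;
using System.Runtime.InteropServices;

// Minimal sketch of a P/Invoke binding. "intelvision" and AnalyzeFrame are
// placeholder names; the real library name and signature must be taken from
// the Intel code's C headers.
public static class VisionNative
{
    [DllImport("intelvision", CallingConvention = CallingConvention.Cdecl)]
    private static extern int AnalyzeFrame(
        byte[] rgbPixels, int width, int height,
        [Out] float[] results, int maxResults);

    // Managed entry point: hand over a raw RGB frame, get back whatever
    // per-frame measurements the native analyzer produced.
    public static float[] Analyze(byte[] rgbPixels, int width, int height)
    {
        var results = new float[16];
        int count = AnalyzeFrame(rgbPixels, width, height, results, results.Length);
        if (count < 0)
            throw new InvalidOperationException("Native vision analysis failed.");
        Array.Resize(ref results, count);
        return results;
    }
}
</source>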

Driving:
* Create a C# wrapper for the Torcs driving APIs. (Maybe Bernhard or someone from Torcs can do this.) (100 hours)
* Port the smart baseline driving code to C#. There is a lot of driving code to write, so we need this code to be clean. (100 hours)
* Stress it by creating various terrain. (40 hours)
* Hook the '''output''' of the vision recognition into the '''input''' of the driving engine! (40 hours; a rough sketch of the hookup follows this list)
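
To make the vision-to-driving hookup concrete, here is a hedged sketch of the glue once both wrappers exist. IVisionSource, IDrivingEngine, LaneObservation, and the controller gains are invented for illustration; the real types will come from the Intel wrapper and the Torcs C# wrapper.

<source lang="csharp">
using System;

// Illustrative glue only: the interfaces and member names below are invented
// placeholders, not the actual Intel or Torcs APIs.
public sealed class LaneObservation
{
    public float LateralOffset;  // metres from the lane centre
    public float HeadingError;   // radians relative to the road direction
}

public interface IVisionSource
{
    LaneObservation AnalyzeCurrentFrame();
}

public interface IDrivingEngine
{
    void SetSteering(float steer);    // -1 .. 1
    void SetThrottle(float throttle); // 0 .. 1
}

public sealed class VisionDrivingBridge
{
    private readonly IVisionSource vision;
    private readonly IDrivingEngine driver;

    public VisionDrivingBridge(IVisionSource vision, IDrivingEngine driver)
    {
        this.vision = vision;
        this.driver = driver;
    }

    // Called once per simulation tick: feed the vision output into the
    // driving engine's inputs via a trivial proportional controller.
    public void Tick()
    {
        LaneObservation obs = vision.AnalyzeCurrentFrame();
        float steer = -0.5f * obs.LateralOffset - 1.0f * obs.HeadingError;
        driver.SetSteering(Math.Max(-1f, Math.Min(1f, steer)));
        driver.SetThrottle(0.3f);
    }
}
</source>

Keeping the bridge behind small interfaces would let the same controller run against the 3-D engine's ghost images during debugging or against the real simulator state later.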

Debugging features:
* Hook the output of the vision recognition into the input of the 3-D engine by creating ghost images.
* Ability to log all changes to the world, including the results of the vision analysis, so that we can replay a run to figure out what happened. (A minimal logger sketch follows this list.)
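
As a rough idea of what the replay log could be, the sketch below appends timestamped world events (vision results, car state, and so on) to a tab-separated file and reads them back in recorded order. The event shape and file format are assumptions made purely for illustration, not an existing component.

<source lang="csharp">
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

// Illustration only: append timestamped world events to a text file so a run
// can be replayed later. The event shape and file format are placeholders.
public sealed class WorldEventLog : IDisposable
{
    private readonly StreamWriter writer;

    public WorldEventLog(string path)
    {
        writer = new StreamWriter(path, append: false);
    }

    // e.g. Log(3.52, "vision", "laneOffset=0.42 headingError=-0.03")
    public void Log(double simTimeSeconds, string source, string payload)
    {
        string time = simTimeSeconds.ToString("F3", CultureInfo.InvariantCulture);
        writer.WriteLine(string.Join("\t", time, source, payload));
        writer.Flush();
    }

    public void Dispose()
    {
        writer.Dispose();
    }

    // Read a log back in recorded order so the world state can be replayed.
    public static IEnumerable<string[]> Replay(string path)
    {
        foreach (string line in File.ReadLines(path))
        {
            yield return line.Split('\t');
        }
    }
}
</source>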

== Later Goals ==
* Add other features like waypoints and stop signs. (Someone else is doing this; we should hook up with them.)
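
If waypoints are added later, one simple representation is an ordered queue of target positions that the driving code steers toward and consumes as each one is reached. The sketch below is purely illustrative and is not tied to any existing Torcs structure.

<source lang="csharp">
using System.Collections.Generic;

// Placeholder waypoint representation; real coordinates would come from the
// track/world model used by the driving engine.
public struct Waypoint
{
    public float X;
    public float Z;
    public float TargetSpeed; // desired speed at this point, in m/s

    public Waypoint(float x, float z, float targetSpeed)
    {
        X = x;
        Z = z;
        TargetSpeed = targetSpeed;
    }
}

public sealed class Route
{
    private readonly Queue<Waypoint> waypoints = new Queue<Waypoint>();

    public void Add(Waypoint w)
    {
        waypoints.Enqueue(w);
    }

    public bool HasTarget
    {
        get { return waypoints.Count > 0; }
    }

    public Waypoint CurrentTarget
    {
        get { return waypoints.Peek(); }
    }

    // Advance to the next waypoint once the car is within radius metres.
    public void AdvanceIfReached(float carX, float carZ, float radius)
    {
        if (waypoints.Count == 0) return;
        Waypoint t = waypoints.Peek();
        float dx = carX - t.X, dz = carZ - t.Z;
        if (dx * dx + dz * dz <= radius * radius)
            waypoints.Dequeue();
    }
}
</source>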

'''Better 3-D Engine?'''
* Evaluate better and more professional-looking 3-D and physics engines.