Software Architecture

Codebase Analysis

See Codebase_Analysis.

Multithreading

See Multithreading.

Architecture

File:Dataflow.odg

Make changes to that file while doing your planning.

Any box in the diagram can be broken down into smaller boxes. Some of the separate boxes might eventually belong inside one another, or otherwise be moved around. The diagram will evolve as our understanding and plans do.

Let's not create another diagram until that one is crammed to the brim!

Overview of modules

There will be three main modules: Driving, Vision, and World Simulator.

Driving

The Driving module will use an interface similar to that of TORCS (a rough sketch follows the list below):

  • Input: data from the Vision module and the World Simulator
  • Output: high-level orders to the car simulator
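
As a concrete starting point, here is a minimal sketch of that boundary in C++. Everything in it (WorldState, DriveOrders, Driver, and their fields) is a hypothetical placeholder loosely inspired by the TORCS robot interface, not actual TORCS code.

  // Hypothetical sketch of the Driving module boundary; all names are placeholders.
  #include <vector>

  struct TrackedObject {            // produced by Vision / the World Simulator
      double x, y;                  // position relative to our car (meters)
      double speed;                 // estimated speed (m/s)
  };

  struct WorldState {               // input: what Driving sees each tick
      double ourSpeed;              // m/s
      double trackCurvature;        // 1 / radius of the road ahead
      std::vector<TrackedObject> nearbyCars;
  };

  struct DriveOrders {              // output: high-level orders to the car simulator
      double targetSpeed;           // m/s
      double steeringAngle;         // radians, positive = left
      int gear;
  };

  class Driver {
  public:
      // Called once per simulation tick.
      DriveOrders drive(const WorldState& world);
  };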

Driving Hard Problems (Motion Planning)

  • It seems like we will encode rules such as: standard practice is to trail the car ahead by 2 seconds.
  • We also mention constraints like avoiding other cars and not making 3-G turns. The question is: how are those rules encoded? (See the sketch after this list.)
  • What about situations where we want to take note of things for later, like the location of potholes? Assuming Vision has decided something is a pothole, how do we add it to the map, and how do we remove it when we drive by later and it has been fixed? In a racing game, we should take note of a jump so that we don't go off a cliff. Suppose we just drove the car through the map to have it gather data: what is the maximum speed it could manage on that first pass?
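
One possible answer, purely as a sketch: encode each rule as a cost or hard constraint that the motion planner evaluates over candidate plans, and keep noticed things (potholes, jumps) in a map-keyed annotation store that we can add to and clear as we re-observe locations. All names and numbers below are hypothetical.

  // Hypothetical sketch: rules as costs/constraints, plus a map annotation store.
  #include <cmath>
  #include <map>
  #include <string>
  #include <utility>

  struct CarState { double speed; double gapToCarAheadSec; double lateralG; };

  // Each rule scores a candidate plan; the planner picks the lowest-cost plan
  // that violates no hard constraint.
  struct FollowingDistanceRule {              // "trail the car ahead by 2 seconds"
      double cost(const CarState& s) const {
          double deficit = 2.0 - s.gapToCarAheadSec;
          return deficit > 0 ? deficit * deficit : 0.0;   // soft penalty
      }
  };

  struct MaxLateralGRule {                    // "don't make 3-G turns"
      bool violated(const CarState& s) const { return std::fabs(s.lateralG) > 3.0; }
  };

  // Things Vision noticed that we want to remember, keyed by map grid cell.
  class MapAnnotations {
      std::map<std::pair<int, int>, std::string> notes_;
  public:
      void add(int cellX, int cellY, const std::string& note) {
          notes_[{cellX, cellY}] = note;        // e.g. "pothole", "jump"
      }
      void clear(int cellX, int cellY) {        // e.g. the pothole was fixed
          notes_.erase(std::make_pair(cellX, cellY));
      }
  };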

Codebase_Analysis#Motion_Planning

http://cig.dei.polimi.it/?cat=4

http://youtube.com/watch?v=ruHzCF3CHIA

Vision

Vision module I/O:

  • Input: screenshot of the world (real or virtual).
  • Output: object geometry and other information necessary to the AI.
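
A minimal sketch of that boundary, with every name (Screenshot, DetectedObject, VisionModule) a placeholder of ours rather than anything from an existing library:

  // Hypothetical sketch of the Vision module boundary; all names are placeholders.
  #include <cstdint>
  #include <vector>

  struct Screenshot {                 // input: one frame, real or virtual
      int width, height;
      std::vector<std::uint8_t> rgb;  // width * height * 3 bytes
  };

  struct DetectedObject {             // output: what the AI needs to know
      enum class Kind { Car, Pothole, Unknown } kind;
      double x, y, widthPx, heightPx; // bounding box in image coordinates
      double estimatedDistance;       // meters, where we can estimate it
  };

  class VisionModule {
  public:
      std::vector<DetectedObject> analyze(const Screenshot& frame);
  };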

Vision Hard Problems

OCR: Is Google's Tesseract [1] a functional subset of Intel's OpenCV library? (Aleks)

What is the roadmap for how we will use more and more of the Intel logic? (Aleks)

  • For the M1 demo, how do we focus it on only finding the cars in front of us? Just getting the pipeline running will take work. We will need to configure the right brightness constants so it knows what it is looking at, for example...
  • What about doing training and object detection? Their website says we can hand it lots of pictures of cars and non-cars and it will start to figure things out. How is that data stored? Can we tweak it?
  • If we know the size of the car, we could use that to estimate distance without having 3-D images to deal with (see the sketch after this list). (We should sketch out 3-D on the roadmap...)
  • I think we should do blob detection first, and then attack object recognition. It might be that blobs are useless and object recognition is much better; fine, then blobs will only be used for a few days ;-) We should work our way up.
  • Radar is another type of input to a vision system. It generally carries less data than a 3-D image, but we could still build filters that massage it and call the Intel code in a different way to make sense of it. For now, let's focus only on 2-D and 3-D images; we can do radar, sonar, etc. later.
  • If we have both visual and radar input, how do we build a system that synthesizes the best of both?
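
To make the car-finding and distance-estimation ideas above concrete, here is a rough sketch using OpenCV's CascadeClassifier (the kind of trained car/non-car detector described above) together with the pinhole-camera relation distance = focal_length * real_width / pixel_width. The cascade file name and the calibration constants are assumptions, not something we have trained or measured yet.

  // Rough sketch: detect cars with a trained OpenCV cascade, then estimate
  // distance from the known real-world width of a car. The cascade file and
  // the calibration numbers below are placeholders.
  #include <opencv2/opencv.hpp>
  #include <cstdio>
  #include <vector>

  int main() {
      cv::CascadeClassifier carDetector;
      if (!carDetector.load("cars_cascade.xml"))     // trained on car / non-car images
          return 1;

      cv::Mat frame = cv::imread("screenshot.png");  // one frame from the simulator
      cv::Mat gray;
      cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

      std::vector<cv::Rect> cars;
      carDetector.detectMultiScale(gray, cars);      // bounding boxes of candidate cars

      const double kCarWidthMeters = 1.8;            // assumed average car width
      const double kFocalLengthPx  = 800.0;          // assumed focal length in pixels

      for (const cv::Rect& car : cars) {
          // Pinhole camera: distance = focal_length * real_width / apparent_width
          double distance = kFocalLengthPx * kCarWidthMeters / car.width;
          std::printf("car at (%d, %d), roughly %.1f m away\n", car.x, car.y, distance);
      }
      return 0;
  }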

World Simulator

World simulator I/O:

  • Input: the progression of time, and instructions to the car
  • Output: visual output for the user, screenshots for the Vision module, APIs to find out where things actually are, and a warning when our car has "crashed", which means there is a bug in the vision or driving logic.
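
A matching sketch of the simulator's boundary; again, every name here is a placeholder of ours.

  // Hypothetical sketch of the World Simulator boundary; all names are placeholders.
  struct DriveOrders;                    // the Driving module's output (see the Driving sketch)
  struct Screenshot;                     // what the Vision module consumes (see the Vision sketch)

  struct GroundTruth {                   // "where stuff actually is", for debugging and scoring
      double carX, carY, carHeading;
  };

  class WorldSimulator {
  public:
      void advance(double dtSeconds, const DriveOrders& orders);  // time passes, car obeys orders
      Screenshot render() const;         // screenshot for the Vision module (and display for the user)
      GroundTruth groundTruth() const;   // API to query actual positions
      bool hasCrashed() const;           // true means a bug in the vision or driving logic
  };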