The virtual network is loaded from a text file (see my earlier post on network parsing). Each node is represented as a white hexagon with a white outline, and white lines serve as the links between nodes.
When the AI agent enters a node, the white hexagon scales down, revealing a hidden red hexagon underneath. Once the white hexagon has shrunk to nothing, the AI is detected and moved back to its spawn position.
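The detection mechanic could be sketched roughly like this (a minimal illustration, not the project's actual code; the names and the shrink rate are assumptions):

```python
SHRINK_PER_SECOND = 0.5  # assumed shrink rate, tune to taste


class Node:
    def __init__(self):
        self.white_scale = 1.0  # 1.0 = full size, 0.0 = red hexagon fully revealed
        self.ai_present = False

    def update(self, dt):
        """Advance the shrink animation; return True when the AI is
        detected so the caller can move it back to its spawn position."""
        if not self.ai_present:
            self.white_scale = 1.0  # reset when the AI leaves the node
            return False
        self.white_scale = max(0.0, self.white_scale - SHRINK_PER_SECOND * dt)
        return self.white_scale == 0.0
```

The white hexagon's scale doubles as the detection timer, so no separate countdown is needed.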
The AI agent was originally an upside-down blue pyramid, but over the weekend I changed it to a blue sphere.
Originally, detection was shown by the node changing color between red and white. I changed this because it was hard to tell when the AI had actually been detected on a node.
Since the project builds on my graphics assignment, I first tried using its scene graph structure to render everything. Given how the AI and the nodes are structured, though, I realized that adapting them to the scene graph would take far more time than simply letting each network entity implement its own render function.
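The per-entity rendering approach might look something like this sketch (illustrative names only, assuming each entity exposes a render method and the network simply walks its list of entities):

```python
class NetworkEntity:
    """Base class: every network entity knows how to render itself."""

    def render(self):
        raise NotImplementedError


class NodeHexagon(NetworkEntity):
    def render(self):
        return "draw white hexagon"


class AiAgent(NetworkEntity):
    def render(self):
        return "draw blue sphere"


def render_network(entities):
    # Let every entity draw itself, independent of the scene graph.
    return [entity.render() for entity in entities]
```

This keeps the network entities decoupled from the scene graph while still giving a single call site that renders the whole network.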
Update: based on feedback from my classmates, I have updated the node visuals.