Virtual Prototyping starts where CAD ends. The CAD objects designed by the CERN engineers are imported into a Virtual Environment, optimised, assigned their surface properties (colour, material, texture, transparency, etc.), and finally organized into a "Virtual World". The viewer is then immediately able to fly through this World and explore it from the inside.
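As an illustration of the kind of bookkeeping this implies (the class names and property fields below are hypothetical, not the actual EUCLID or CLOVIS data structures), a Virtual World can be thought of as a list of parts, each carrying the surface properties assigned after import:

from dataclasses import dataclass, field

@dataclass
class SurfaceProperties:
    colour: tuple = (0.7, 0.7, 0.7)   # RGB components in [0, 1]
    material: str = "steel"
    texture: str = ""                 # path of a texture image, if any
    transparency: float = 0.0         # 0 = opaque, 1 = fully transparent

@dataclass
class Part:
    name: str
    mesh_file: str                    # geometry exported from the CAD system
    surface: SurfaceProperties = field(default_factory=SurfaceProperties)

@dataclass
class VirtualWorld:
    name: str
    parts: list = field(default_factory=list)

    def add(self, part):
        self.parts.append(part)

# Example: a (hypothetical) detector barrel, made semi-transparent so that
# the inner layers remain visible while flying through the world.
world = VirtualWorld("LHC-prototype")
world.add(Part("barrel", "barrel.obj",
               SurfaceProperties(colour=(0.2, 0.4, 0.9), transparency=0.5)))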
Virtual Prototypes are an ideal replacement for the wooden models traditionally built for past CERN machines. Since they are generated directly from the EUCLID CAD files, they preserve the original accuracy, can be updated in a matter of minutes, and allow immersive visualization at any preferred scale, through any preferred navigation metaphor. Because the fly-through is performed with one of the numerous off-the-shelf packages available on the Virtual Reality market, we are free to add any of the navigation peripherals supported by that package at no extra effort. At present the VENUS lab offers three navigation metaphors:
The second metaphor (joystick and video projector) is used for group visualization. One person controls the flight with a joystick while a number of viewers follow on a large stereoscopic screen, wearing polarized glasses. The stereo projector is installed in a conference room adjacent to the VENUS lab, where design teams meet and discuss the development of the models. The large screen (160x160 cm) allows more detailed viewing, and the stereo vision gives a better understanding of depth and volumes, particularly with semi-transparent or wireframe objects.
The third metaphor is the classic Virtual Reality approach to immersive navigation. A VR helmet places you inside the model, giving you a realistic perception of proportions. By moving your head around you can explore the virtual world as if you were really walking through it. Translations of your body in the virtual environment are controlled through the forward and backward movement buttons on the 3-D joystick.
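As a rough sketch of how this third metaphor combines its two inputs (head orientation from the helmet tracker, translation from the joystick buttons), with function and parameter names that are purely illustrative and not tied to any particular tracker library:

import math

def update_viewpoint(position, yaw, pitch, forward_pressed, backward_pressed,
                     speed=1.0, dt=0.05):
    # yaw and pitch (radians) come from the helmet tracker; the joystick
    # buttons only translate the viewpoint along the current viewing direction.
    direction = (math.cos(pitch) * math.sin(yaw),
                 math.sin(pitch),
                 math.cos(pitch) * math.cos(yaw))
    step = speed * dt * (int(forward_pressed) - int(backward_pressed))
    return tuple(p + step * d for p, d in zip(position, direction))

# One update step: the viewer looks 30 degrees to the left and presses "forward".
pos = update_viewpoint((0.0, 1.7, 0.0), yaw=math.radians(30), pitch=0.0,
                       forward_pressed=True, backward_pressed=False)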
The LHC Virtual Prototypes are designed in EUCLID, then exported in Wavefront format to VENUS's SGI Onyx RealityEngine2, where they are visualized with Medialab's CLOVIS Virtual Reality package or with our 3D Web browser, i3D.
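The Wavefront format used in this export step is a plain-text format; a minimal reader for its vertex and face records (deliberately ignoring normals, texture coordinates, materials and groups, which a real EUCLID-to-CLOVIS/i3D pipeline would of course handle) might look like this:

def load_wavefront(path):
    # Collect vertex positions ("v" records) and faces ("f" records).
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            if fields[0] == "v":          # vertex position: x y z
                vertices.append(tuple(float(x) for x in fields[1:4]))
            elif fields[0] == "f":        # face: 1-based indices, possibly
                # written as v/vt/vn; keep only the vertex index
                faces.append(tuple(int(token.split("/")[0]) - 1
                                   for token in fields[1:]))
    return vertices, faces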
The terrain elevation curves, as well as the future surface buildings, are modelled in EUCLID and then exported to the Onyx in the same way as the Virtual Prototypes. There the land model is "dressed" with houses, fields, trees, road signs and other details by pasting on textures obtained from scanned photographs, post-processed on a Mac with Adobe Photoshop.
The result is a Virtual World that you can fly through and observe from any possible angle, using the software and metaphors described above. While moving you can also interact with the environment, moving the future buildings horizontally or vertically in real time and observing the effect on the landscape until you find the best solution. You can also add hills and forests at a keyclick, or watch the trees grow, in order to plan the landscape architecture over time.
Technically speaking, a territorial model consists of a wireframe model of the terrain planimetry, onto which aerial photos of the land are mapped. Buildings and other man-made objects also consist of photos mapped onto polygons. Photos of trees can be mapped onto crossed planes ("criss-cross trees"). The flight is controlled by the same software used for the VENUS Virtual Prototypes project (Medialab's CLOVIS).
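The "criss-cross tree" trick can be sketched as follows: the same tree photograph is mapped onto two vertical quads crossed at right angles, which looks acceptably tree-like from most viewing directions (the geometry below is purely illustrative):

def crisscross_tree(x, z, height=10.0, width=6.0):
    # Two vertical quads crossing at 90 degrees, both meant to carry the
    # same tree texture. Each quad is a list of (x, y, z) corners, with y up.
    hw = width / 2.0
    quad_a = [(x - hw, 0.0, z), (x + hw, 0.0, z),
              (x + hw, height, z), (x - hw, height, z)]
    quad_b = [(x, 0.0, z - hw), (x, 0.0, z + hw),
              (x, height, z + hw), (x, height, z - hw)]
    return quad_a, quad_b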
VENUS is currently evaluating a few software packages that would allow us not only to simulate these operations, but also to perform them automatically. After converting our Virtual Prototypes to the formats accepted by these packages, we will be able to let the computer calculate, for each item, a collision-free path within the degrees of freedom of our bridge cranes. This path will then be fed to a robot controller that will drive the cranes during the build phases. This should allow us to assemble our future detectors almost without human intervention, therefore minimizing the probability of erroneous manoeuvres.
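As a toy illustration of the kind of computation such a package performs (none of the packages under evaluation is implied here), the sketch below searches for a collision-free path on a coarse grid spanning the two horizontal degrees of freedom of a bridge crane, i.e. bridge travel and trolley travel, with already-installed items marked as blocked cells:

from collections import deque

def crane_path(start, goal, blocked, size):
    # Breadth-first search over (bridge, trolley) grid cells; returns the
    # list of cells from start to goal, or None if no collision-free path exists.
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for db, dt in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + db, cell[1] + dt)
            if (0 <= nxt[0] < size[0] and 0 <= nxt[1] < size[1]
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None

# Toy example: route the hook around a blocked column of cells.
path = crane_path((0, 0), (9, 9), blocked={(5, y) for y in range(8)},
                  size=(10, 10))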
In order to allow interactive worldwide access to the experiments' 3-D data, VENUS has committed to developing a specialized package that provides web-based, on-screen navigation and graphic integration. This package, originally created at CRS4 by Jean-Francis Balaguer and Enrico Gobbetti, is called i3D.
i3D stands halfway between a VR navigation tool and a web browser. As a VR tool it performs on-line rendering and navigation, and it also supports some of the VR peripherals, such as the Spaceball and stereo glasses. As a web browser it supports hyperlinks, allowing you to download all kinds of objects supported by conventional web browsers. It lets you fly around in a virtual world and click on the objects you find to trigger the loading of more detailed information. Also, by using hyperlinks to the virtual worlds representing each single detector part (hyperworlds), the LHC Virtual Prototypes are actually integrated on your screen with a simple mouse click.
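Conceptually, the hyperworld mechanism can be sketched like this (the class names and the fetch callback are hypothetical, not the actual i3D interfaces): each object in the scene may carry a hyperlink, and clicking it either merges the linked world into the current one or hands the document over to an external viewer.

class LinkedObject:
    def __init__(self, name, geometry, href=None):
        self.name = name
        self.geometry = geometry      # whatever describes the object's shape
        self.href = href              # URL of a more detailed world or document

class Scene:
    def __init__(self, objects=None):
        self.objects = list(objects or [])

    def on_click(self, obj, fetch):
        # fetch(url) stands for whatever downloads and parses the linked content.
        if obj.href is None:
            return
        linked = fetch(obj.href)
        if isinstance(linked, Scene):         # a hyperworld: merge it in
            self.objects.extend(linked.objects)
        else:                                 # any other document type
            print("handing", obj.href, "over to an external viewer")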
i3D runs today on Silicon Graphics workstations and AlphaStations. Future developments of i3D include support for specific engineering features, as well as multiplatform support and an improved GUI.
At CERN, we could use Virtual Environments to visualise a number of theoretical physics models (Quantum Dynamics, subatomic interactions, etc.), but also to produce a simulation of a detector. The Virtual Detector would produce virtual events and place the physicist inside them, to watch them happen in 3-D and in real time. These events could be either the fruit of Monte Carlo simulations or reconstructions from the data acquisition systems. The use of colour, sound, light and simulated fog would allow us to represent non-visible quantities, such as luminosity, magnetic fields, interactions, particle lifetime and so on.
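As a toy example of such a mapping (the quantity, range and colour ramp below are invented for illustration), a particle's momentum could be turned into a track colour, blue for soft tracks and red for hard ones:

def momentum_to_colour(p, p_min=0.1, p_max=100.0):
    # Clamp the momentum (GeV/c) into [p_min, p_max] and map it linearly
    # onto a blue-to-red ramp; returns (red, green, blue) in [0, 1].
    t = (min(max(p, p_min), p_max) - p_min) / (p_max - p_min)
    return (t, 0.2, 1.0 - t)

# A 5 GeV/c track comes out bluish, a 90 GeV/c track reddish.
soft, hard = momentum_to_colour(5.0), momentum_to_colour(90.0)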
On this ground, an initial study collaboration has started with NA49 to produce static 3D models of events. If you have i3D installed, you may want to fly through this first example.
Another study, aimed at interfacing i3D with GEANT4, has already started.