Dr Graham Treece, Department of Engineering
F-GMT11-3: Automatic transfer functions for volume visualisation
[Figure: Volume rendering is potentially a very powerful visualisation tool for medical imaging and for micro-CT scans of other objects; here it is shown with a default mapping from density to brightness. But a good rendering is highly reliant on defining various material properties and associating them with CT values carefully. Can we just use a photo of a similar object and assign a transfer function automatically?]
Volume rendering is a way of directly visualising 3D tomographic data which is much easier to understand and investigate than viewing the original cross-sectional images. The technique has been around for a while, but both the quality and the rendering speed have improved considerably in the last decade: if implemented sufficiently efficiently, it can now be used even on large data sets (e.g. micro-CT) with fairly standard computers.
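To make the later discussion of transfer functions concrete, the sketch below shows the core of direct volume rendering: front-to-back compositing of samples taken along one viewing ray. The RGBA type, the sample densities and the toyTransfer() mapping are invented purely for illustration; a real renderer samples a 3D CT volume and uses a full transfer function.

// A minimal, self-contained sketch of front-to-back alpha compositing
// along a single viewing ray. Everything here is illustrative only.
#include <cstdio>
#include <vector>

struct RGBA { float r, g, b, a; };

// Toy mapping: low densities are transparent, higher densities become
// progressively more opaque and change hue (soft tissue to bone, say).
RGBA toyTransfer(float density)
{
    if (density < 0.2f) return {0.0f, 0.0f, 0.0f, 0.0f};   // air: invisible
    if (density < 0.6f) return {0.8f, 0.4f, 0.3f, 0.05f};  // soft tissue
    return {0.95f, 0.95f, 0.85f, 0.6f};                    // bone
}

int main()
{
    // Densities sampled along one ray, ordered front to back.
    std::vector<float> samples = {0.1f, 0.15f, 0.5f, 0.55f, 0.9f, 0.9f, 0.3f};

    RGBA out = {0.0f, 0.0f, 0.0f, 0.0f};    // accumulated colour and opacity
    for (float d : samples)
    {
        RGBA s = toyTransfer(d);
        float w = s.a * (1.0f - out.a);     // contribution not yet occluded
        out.r += w * s.r;
        out.g += w * s.g;
        out.b += w * s.b;
        out.a += w;
        if (out.a > 0.99f) break;           // early ray termination
    }
    std::printf("pixel colour: %.3f %.3f %.3f (alpha %.3f)\n",
                out.r, out.g, out.b, out.a);
    return 0;
}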
Producing a good rendering involves designing colour maps (or transfer functions) which relate the original data values (for instance density, for CT) to colour, opacity and other material parameters such as roughness or whether a material is a metal. Many different sorts of transfer function have been proposed, some of which make use of data gradients or curvatures as well as the data values themselves. However, the limiting factor for all of these is often the complexity of defining the actual colour mapping. Standard transfer functions are usually provided which 'work' for certain scenarios; editing them would nearly always produce a better result, even when the scenario fits, but doing so is often a very counter-intuitive task.
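As an illustration of what such a mapping has to encode, the sketch below associates each data value with colour, opacity and shading parameters via piecewise-linear interpolation between hand-edited control points. The Material and ControlPoint types, the property names and the numbers are all assumptions made for this sketch; real transfer functions may be richer, for instance also depending on gradient magnitude. Automating the choice of these control points and materials is exactly the step this project is concerned with.

// Sketch of a simple transfer function: each CT value is mapped to colour,
// opacity and shading parameters by interpolating between control points.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Material {
    float r, g, b;      // base colour
    float opacity;      // 0 = invisible, 1 = fully opaque
    float roughness;    // surface shading parameter
    float metalness;    // 0 = dielectric, 1 = metal
};

struct ControlPoint {
    float density;      // CT value (e.g. rescaled to [0,1])
    Material material;
};

// Piecewise-linear interpolation of every material property between the
// control points that bracket the queried density.
Material lookup(const std::vector<ControlPoint>& tf, float density)
{
    if (density <= tf.front().density) return tf.front().material;
    if (density >= tf.back().density)  return tf.back().material;
    for (std::size_t i = 1; i < tf.size(); ++i)
    {
        if (density <= tf[i].density)
        {
            const Material& a = tf[i - 1].material;
            const Material& b = tf[i].material;
            float t = (density - tf[i - 1].density) /
                      (tf[i].density - tf[i - 1].density);
            auto mix = [t](float x, float y) { return x + t * (y - x); };
            return { mix(a.r, b.r), mix(a.g, b.g), mix(a.b, b.b),
                     mix(a.opacity, b.opacity),
                     mix(a.roughness, b.roughness),
                     mix(a.metalness, b.metalness) };
        }
    }
    return tf.back().material;   // unreachable; keeps the compiler happy
}

int main()
{
    // Hand-edited control points: the step this project tries to automate.
    std::vector<ControlPoint> tf = {
        {0.00f, {0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f}},   // air
        {0.45f, {0.8f, 0.5f, 0.4f, 0.1f, 0.8f, 0.0f}},   // soft tissue
        {0.80f, {0.9f, 0.9f, 0.8f, 0.7f, 0.4f, 0.0f}},   // bone
    };
    Material m = lookup(tf, 0.6f);
    std::printf("density 0.6 -> colour %.2f %.2f %.2f, opacity %.2f\n",
                m.r, m.g, m.b, m.opacity);
    return 0;
}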
So to what extent can these mappings be automatically designed for each specific data set? For scans of a real object, some prior work has already been completed on automatically extracting appropriate colours and material shininess from a photo of the object. However, this is a complex process, and optimally extracting these material properties has yet to be achieved. Equally hard is deciding how each material should be mapped to data values: in CT, only different densities can be mapped to different materials, yet there may be multiple materials with the same density. In addition, CT sees the entire 3D volume, whereas a photo only shows the outer surface. The figure above shows an attempt to match materials based just on their relative area (in the image) compared to their relative volume (in the CT data), but this simple matching fails in many situations.
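The following sketch spells out that naive area-to-volume matching strategy: materials segmented from a photo are ranked by image area, density clusters from the CT histogram are ranked by volume, and the two rankings are paired off. All the names and numbers here are invented for illustration, and, as noted above, this strategy breaks down whenever interior structures, occluded materials or materials of equal density are involved.

// Rough sketch of rank-order matching of photo materials to CT densities.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

struct PhotoMaterial  { std::string name; float areaFraction; };    // from a segmented photo
struct DensityCluster { float meanDensity; float volumeFraction; }; // from the CT histogram

int main()
{
    std::vector<PhotoMaterial> photo = {
        {"enamel", 0.15f}, {"dentine", 0.55f}, {"gum", 0.30f}
    };
    std::vector<DensityCluster> ct = {
        {0.35f, 0.40f}, {0.95f, 0.10f}, {0.70f, 0.50f}
    };

    // Rank both lists by their relative size.
    std::sort(photo.begin(), photo.end(),
              [](const PhotoMaterial& a, const PhotoMaterial& b)
              { return a.areaFraction > b.areaFraction; });
    std::sort(ct.begin(), ct.end(),
              [](const DensityCluster& a, const DensityCluster& b)
              { return a.volumeFraction > b.volumeFraction; });

    // Pair them off: largest area with largest volume, and so on.
    std::size_t n = std::min(photo.size(), ct.size());
    for (std::size_t i = 0; i < n; ++i)
        std::printf("%-8s (area %.2f) -> density %.2f (volume %.2f)\n",
                    photo[i].name.c_str(), photo[i].areaFraction,
                    ct[i].meanDensity, ct[i].volumeFraction);
    return 0;
}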
This is an algorithmic development / software project, so experience of writing software is essential. C++ would be helpful, though the development could be done using another programming environment.