Micro-CT, part 2

I’ve written previously about using Micro-CT to extract information from small bits of terracotta ceramic (Part 1), and Part 3 will look at density in more detail. This post takes a more detailed look at the software (Avizo) I’m using to process the 10GB voxel datasets and turn them into something I can work with easily, or at least into manageable file formats and sizes. I’m going to talk about the software’s design; making movies; extracting cross-sections; histogram data; and the pore wizard, Auto-Skeleton.

The basic screen of Avizo, once you open your project, typically looks a bit like the below. There’s a big old view screen on the right; a colourful flowchart of components in the middle left; detailed options for the selected component at the bottom left; and, at the top left, a chart showing the relationship between the greys on screen and the voxel density.

The top left greyscale settings chart looks a little odd because I’m carrying out this work on a predetermined scale: black at 50, white at 305. The voxels below 50 are nearly all the 3d printed mounting equipment, and I’ve set my upper limit at 255 above that, so there’s an easy conversion to 8-bit rgb images. In the case of this piece, there are quite a few voxels denser than that. Something to consider. The greyscale mapping doesn’t have to be linear either; I’ve just left it linear for simplicity.
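If you ever need to reproduce that windowing outside Avizo, the maths is trivial. Here’s a minimal numpy sketch of how I understand the mapping to work (my assumption, not Avizo’s actual code):

```python
import numpy as np

# Window the raw voxel densities: 50 maps to black, 305 to white, linear
# in between, clipping anything outside. The window is 255 units wide,
# so the conversion to 8-bit greyscale images is one-to-one.
def window_to_8bit(voxels, black=50, white=305):
    clipped = np.clip(voxels.astype(np.float64), black, white)
    return ((clipped - black) / (white - black) * 255).astype(np.uint8)
```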

The colourful flowchart of components in the picture is set up to demonstrate some principles, but mostly to rotate the sample. I’m using the word component as it’s the same terminology as Grasshopper, and is a little less confusing than buttons, nodes, or boxes. I’m really not sure what logic Avizo uses to set the colours: green components are ‘data’; red components are little programs that do something with data, but light blue components are also programs; orange and yellow components produce results in the visualisation field; dark green components work with yellow ones… I try to keep the flowchart tidy by keeping the green data components on the left-hand side, program modules in the middle, and ‘outputs’, visual or not, on the right. There’s no space for great sprawling Grasshopper scripts here!

A close up of the previous image

This flowchart is relatively simple. At the very top there’s the green Open Data button, and a set of four ‘floating’ components, which are just the ones I’ve most recently used. We can ignore them; they’re just a shortcut. In the main canvas, there’s the starting project data in the top left green component. It connects to an orange Ortho Slice component, which cuts a slice through the model and shows it in the visualisation. This is a default pair that’s normally there when you open some data. You can click on the Ortho Slice component, and the options for which axis to cut along, and how far through the model, appear at the bottom left of the screen.
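As a mental model (just numpy indexing on a made-up array, nothing Avizo-specific), an ortho slice is one plane of the voxel grid:

```python
import numpy as np

volume = np.random.rand(300, 200, 200)  # stand-in for a (z, y, x) scan

slice_xy = volume[120, :, :]  # an xy Ortho Slice, 120 voxels along the z axis
slice_xz = volume[:, 60, :]   # change the indexed axis for an xz slice instead
```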

I added the blue rotate component, and Avizo automatically linked it to the ortho slice too, so I can see the results. My sample had scanned in at a slight angle, and it would cause issues later on, so it’s easier to straighten it up now. This is a bit fiddly: in the component settings you set the angle, and then in a separate place you set the 0-1 fraction of that rotation to enact. It’s intended for movie making, but we don’t need that; I just fiddle around until it looks straight enough. You’d expect this to create a new data component, but no, it’s totally superficial for now. I add the red Resample Transformed Image component and activate it. This takes all the transforms applied to the first green dataset and creates a new green dataset (in a minute or two). It’s not very intuitive, but this is the only bit of the flowchart that doesn’t flow. The rest is better.
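Conceptually (and this is my guess at the equivalent operation, not Avizo’s internals), the rotate-then-resample pair does something like scipy’s rotate with interpolation:

```python
import numpy as np
from scipy import ndimage

volume = np.random.rand(300, 200, 200)  # stand-in for the scan data

# Hypothetical angle, found by eye. reshape=False keeps the grid size fixed,
# and order=1 (linear interpolation) resamples the voxel values onto the
# new grid -- the step that Resample Transformed Image makes permanent.
straightened = ndimage.rotate(volume, angle=3.5, axes=(1, 2),
                              reshape=False, order=1)
```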

Cutting cross sections in bulk

The labels (in brackets) are quite small, so maybe open this image in a separate tab

After the Resample transform, the black boundary of the scan is square on the screen again. I’ve added a new ortho slice off the new data, and this time it’s set to the xy plane. I’ve set the other components to hidden: click the little red box on the left of each component and it goes grey. (If you have multiple views open, you can have different components visible in each, which may be handy.)

What I’m looking to do is extract a set of cross-section images to work with at my convenience. I can set the cross-section as I want and use the save button (1) above the visual window to save it out, but that’s slow and painful to do for many slices. Instead, what I’ve done is, in the far top left, move onto the moviemaker tab (2). I set my Ortho Slice component to zero in the bottom left area (3), and then click the little clock button that has appeared next to it (4). If there’s no clock, you’re not on the moviemaker tab. I see how many frames I need to output, calculate how long the movie needs to be to do that at 25fps, set the timer in the bottom middle to that (5), change the ortho slice to the maximum (3 again), and click the clock again (4 again). This creates the blue-grey slider on the right-hand side (6), which shows how that parameter will change with time. It’s a clever system that lets you do multiple things at once or in sequence. Good for presentations perhaps, but I just want a bunch of static images.
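The timing calculation for step (5) is just frame count over frame rate; with made-up numbers:

```python
n_slices = 1500              # cross-sections to export, one per frame
fps = 25                     # Avizo's default frame rate
duration_s = n_slices / fps  # movie length to set on the timer: 60 seconds here
```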

With the animation defined, I click over from the Options cogwheel (7 on the previous image; top right, under the timer) to the Create Movie options tab next to it. I change the file format to png images, browse to where I want to save them all, and click Create Movie.

(my) Frequently Made Errors:

  • The total length of the movie is set separately from the animation bars, and it won’t let you move to a time outside it. The place to adjust it is labelled (8) on the main image. It defaults to a minute long, but try to avoid it being too long, as it takes longer to output and you just get hundreds of duplicate images from the end of the animation to delete.
  • The FPS can also be changed from 25 (9). I’ve left it at the default: fewer things to change, and it keeps the animation-length maths simple.
  • The output pngs will show whatever is on the viewscreen. So make sure you’re only showing what you want, and make double sure the viewscreen is set to parallel (orthographic) projection and not perspective (10), otherwise your image will be resized in each frame, ruining the voxel-pixel-micron relationship.

Cutting a subvolume and approximating the pore network

The real advantage of Micro-CT is the 3d bulk information it gives us, compared to the 2d slice that polished-section microscopy offers. The first thing almost every person I’ve spoken to is interested in is porosity: can we see the pores?

Avizo has a number of algorithms dedicated to this. There’s an example in the tutorials of a foam sample, separating the voids and the solids with a binary segmentation that then lets you do statistical work on the pore sizes, diameters, and distortions. There’s another example for working with sand/glass balls sintered loosely together. Basically, situations with clearly defined, near-spherical positive or negative pores. In my terracotta, we don’t have that.

A bunch of snipped images from the Avizo tutorial documents

What I have, terracotta, may have started off like that when it was dry clay, but during firing, bits melt, shrink and off-gas, clay sheets collapse, silica flows out as liquid glass in the pores (it does NOT flow like water, by the way), and the gaps between particles partly collapse and partly reshape as hot gases push out to the surface. The pores are smaller and irregular, and the number of mixed-density ‘edge’ voxels is higher. To investigate this, I made use of a wizard in Avizo called ‘Auto-Skeleton’.

The steps for this are:

1) I cut a subvolume from the main volume of the scan. This subvolume is about 2mm by 2mm by 6mm. This massively speeds up processing time, and means I’m not accidentally including the air around the material as a very large pore. Avizo subvolumes are always cuboid, and making the tedious manual alignment a bit easier is a major reason why I rotated the block at the very start of this post. I save it as a separate view (.am file) so I can easily recreate it. (Steps 1 and 2 are sketched in code after this list.)

2) Since I’ve gone to the effort of having a volume that’s ‘solid’ terracotta, I generate a voxel histogram for it. It’s useful data for later, and the steps in the histogram may indicate different materials and their proportions (note the vertical scale is a log scale). I save it as a csv file to read into other software.

3) I feed the subvolume into the Auto-Skeleton. This component does a lot under the hood, but spits out a 3d line trace of where pores might be and where they connect. This gives us data on tortuosity, connectivity, and whether pores running to the surface are different to pores running parallel to it.
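Steps 1 and 2 are easy to mirror outside Avizo too; a numpy sketch, with made-up array sizes and crop indices:

```python
import numpy as np

volume = np.random.randint(0, 400, size=(600, 600, 900))  # stand-in scan

# Step 1: a cuboid subvolume is plain array slicing. These indices are
# hypothetical -- whatever corresponds to ~2mm x 2mm x 6mm at the scan's
# voxel size.
sub = volume[100:300, 100:300, 150:750]

# Step 2: the voxel histogram, one bin per integer density value, saved
# to csv for other software to read.
counts, edges = np.histogram(sub, bins=np.arange(sub.min(), sub.max() + 2))
np.savetxt("subvolume_histogram.csv",
           np.column_stack([edges[:-1], counts]),
           delimiter=",", header="density,count", comments="")
```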

There are a few settings to fiddle with here. I need ‘black on white’ as I’m tracing pores (not white on black, for tracing dense fibres, for example). I need to set the density threshold below which a voxel counts as a ‘pore’; I slightly arbitrarily chose 90 for all the samples, based on 2 standard deviations away from the peak of the histogram (see below). I leave the other settings on their defaults. The result is visible in the picture above, with the relatively few pores typical for terracotta: thin lines tracing pores, and spheres marking node connections between pores. It’s possible to make this view more meaningful, with line diameter matching pore diameter, colour maps etc., but I just export the data and the graph statistics to xml.

Graph showing count of voxels by density (in the subvolume), and the cutoff I’m using.
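The cutoff rule itself is one line of arithmetic. A sketch of my reasoning (whether you take the standard deviation of the whole subvolume or of a fitted peak will shift the number a little; the filename is hypothetical):

```python
import numpy as np

sub = np.load("subvolume.npy")  # hypothetical file: the cropped block again

counts, edges = np.histogram(sub, bins=np.arange(sub.min(), sub.max() + 2))
peak = edges[np.argmax(counts)]    # the modal 'solid terracotta' density
threshold = peak - 2 * sub.std()   # two standard deviations below it (~90 here)
```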

My understanding of the default settings and underlying script is that once the voxels are split into ‘pore’ and ‘not-pore’ based on the threshold, the distance from each pore voxel to the nearest non-pore voxel is calculated, with pore centres and diameters set by the maxima in the distance map. Then, under the hood, smaller, noisier pore segments are thinned out, and the skeleton script figures out what connects to what. The parts are all available separately if you want to build it yourself for detailed control. The approach struggles with air films around large grains, giving a net of linked cylindrical pores rather than the true flattish air gap. As it is based on voxel thresholds, it also can’t really find pores that are smaller in diameter than a voxel, which is a bit of a limitation. Still, it would not be possible to recover the large-pore 3d geometry any other way, and it’s interesting for that alone.
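If you do want to build it yourself, open-source equivalents of the parts exist. A rough sketch with scipy and scikit-image (my approximation of the pipeline, not Avizo’s actual algorithm, and again with a hypothetical input file):

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

sub = np.load("subvolume.npy")  # hypothetical file: the cropped block

pores = sub < 90  # voxels darker than the threshold count as pore space

# Distance from each pore voxel to the nearest solid voxel; the local
# maxima of this map give the pore centres and radii described above.
dist = ndimage.distance_transform_edt(pores)

# Thin the pore space to a one-voxel-wide centreline network. (Recent
# scikit-image versions handle 3d arrays directly.) Turning this into
# the node/branch graph that Auto-Skeleton exports takes one more step,
# e.g. the skan library's Skeleton class.
skel = skeletonize(pores)
radii = dist[skel.astype(bool)]  # pore radius estimates along the skeleton
```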

