[elastix] A question

Stefan Klein s.klein at erasmusmc.nl
Thu Mar 18 11:52:00 CET 2010


Hi Andriy,

Some additional clarifications about local MI:

> The RandomSampleRegion option seems to be only implemented for the
> RandomCoordinate sampler.

That's correct.

> Note that you can use more optimizers than just StandardGradientDescent
> with this specific way of sampling.

The StandardGradientDescent and the AdaptiveStochasticGradientDescent 
are most appropriate in combination with this sampling strategy. With 
StandardGradientDescent you have to specify three parameters to define 
the step size:

SP_a
SP_A
SP_alpha

For AdaptiveStochasticGradientDescent you only have to specify SP_A. I 
recommend setting it to 100 if you're using the random region sampling 
strategy. But feel free to experiment with it.
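
For illustration, the relevant part of a parameter file could look
roughly like this (the SampleRegionSize and NumberOfSpatialSamples
values are just placeholders; choose them to suit your data):

	(Optimizer "AdaptiveStochasticGradientDescent")
	(SP_A 100.0)
	(ImageSampler "RandomCoordinate")
	(UseRandomSampleRegion "true")
	(SampleRegionSize 25.0 25.0 25.0)
	(NumberOfSpatialSamples 2048)

With StandardGradientDescent you would instead specify SP_a, SP_A and
SP_alpha explicitly.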

> 
>> Then my questions are:
>>
>> 1) Can I use not just a single voxel (and its neighborhood)
>> per iteration, but a sum over, e.g., 100 neighborhoods
>> simultaneously? Or even all image voxels (and their neighborhoods)?
> 
> Nope. Currently, this sampler simply samples from one (1) region.

It is possible by using the MultiMetric registration options; see 
section 6.1.1 of the manual. You would have to create multiple mutual 
information metrics, each with its own sampler. It's a bit of a hack, 
though, and practical only for up to about 10 neighbourhoods 
simultaneously.
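
A sketch of what that could look like in the parameter file, with two 
metrics as an example (repeat the entries once per metric; the weights 
are placeholder values):

	(Registration "MultiMetricMultiResolutionRegistration")
	(Metric "AdvancedMattesMutualInformation" "AdvancedMattesMutualInformation")
	(Metric0Weight 1.0)
	(Metric1Weight 1.0)
	(ImageSampler "RandomCoordinate" "RandomCoordinate")
	(UseRandomSampleRegion "true" "true")

Each metric then draws its samples from its own randomly selected region.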

> 
> You can, however, easily create a new sampler or modify the
> aforementioned one to implement this. It only requires modification of
> the GenerateData() function of this class. And some changes in 
> 
> 	elastix\src\Components\ImageSamplers\RandomCoordinate
> 
> We can give you some pointers if you are interested.

It would be better to write a new similarity metric that computes the 
sum of mutual information values over local regions.
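
Conceptually, such a metric would evaluate something like

	C(mu) = sum_r MI( F, M o T_mu ; Omega_r )

i.e. the sum of mutual information values, each computed over a local 
region Omega_r, with the gradient accumulated over all regions in a 
single evaluation.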

> 
>> 2) When a random voxel and its neighborhood are selected, can 
>> I simply use all the voxels within it?
> 
> Also a nope. And again it is quite easy to change the code.

Indeed, that's currently not possible, but would be easy to implement.

Are you aiming to use the localized mutual information in combination 
with a deterministic optimizer (conjugate gradient, for example)?

Kind regards,
Stefan.



