Why do PEGS energy cutoffs affect EGS execution times?


Making one global data set for all EGS4 runs is not a good idea for two reasons:

Accuracy.

PEGS/EGS fits cross-section data with a fixed maximum number of points, in the vicinity of 150--200. If the energy range extends near or below the K-edge, which is a discontinuity, the fitting-accuracy criterion cannot be satisfied. The larger the dynamic range, the worse the approximation, because the same number of fit points must cover more ground.


However, you have to work pretty hard to find an example that makes a big difference! In the high-energy part of the cross sections, where everything is asymptotic and logarithmic, the spacing of the fit points does not matter very much, but at low energies differences can be observed. (Try using EXAMIN to plot cross sections in the vicinity of the K-shell.) So the moral is: make a new PEGS4 data file for each problem, with no larger a dynamic range than necessary.
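To see why the dynamic range matters, the sketch below (illustrative only; the 200-point budget, the log spacing, and the 10 keV edge energy are assumptions for the example, not PEGS4 internals) computes how wide the fitting interval near a K-edge becomes when the same point budget is spread over a narrow versus a wide energy range:

```python
import numpy as np

def knot_spacing_at(e, e_min, e_max, n_points=200):
    """Width (in MeV) of the log-spaced fitting interval that contains
    energy `e`, for a table of n_points knots spanning [e_min, e_max]."""
    knots = np.geomspace(e_min, e_max, n_points)
    i = np.searchsorted(knots, e)   # index of the first knot >= e
    return knots[i] - knots[i - 1]

# Same point budget, two dynamic ranges; probe near a hypothetical 10 keV edge:
narrow = knot_spacing_at(0.010, 0.005, 0.1)     # 5 keV .. 100 keV
wide   = knot_spacing_at(0.010, 0.005, 100.0)   # 5 keV .. 100 MeV
print(f"interval width near 10 keV: narrow range {narrow*1e3:.3f} keV, "
      f"wide range {wide*1e3:.3f} keV")
```

With the wide range, the interval straddling the edge comes out roughly three times coarser, so any discontinuity there is smeared over a correspondingly larger energy span by the fit.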

Efficiency.

Unless you specifically require sub-ECUT modeling (for energy straggling or some other purpose), you should be aware that the Moller cross section goes like 1/E^2 in the secondary's energy. So if, for example, you are using an ECUT of 0.531 and an AE of 0.516 (total energies in MeV, i.e. kinetic thresholds of 20 keV and 5 keV), you are not creating a "few" secondaries, you are creating a whole lot. Because EGS is a general-purpose code, the full cross-section modeling is invoked, including the energy selection and angular distributions of secondaries below ECUT, because the user may have some use for this information (the angular distribution of discarded particles, for example).

So, be aware of the CPU-hungry sub-ECUT modeling, and employ it only when you really need it.
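A back-of-the-envelope sketch of how fast the secondary count grows: keeping only the leading 1/T^2 behaviour of the Moller cross section in the delta-ray kinetic energy T, the number of deltas above threshold per unit path is proportional to 1/T_min - 1/T_max. The AE of 0.516 is the FAQ's example; the higher comparison AE of 0.611 and the 1 MeV primary are assumptions made for illustration:

```python
M_E = 0.511  # electron rest energy, MeV

def relative_delta_rate(ae_total, t_max):
    """Relative number of Moller delta rays produced per unit path length
    above threshold AE, keeping only the leading 1/T^2 dependence of the
    cross section on the delta-ray kinetic energy T.
    Arbitrary normalization -- only ratios are meaningful."""
    t_min = ae_total - M_E  # AE is a *total* energy; convert to kinetic
    return 1.0 / t_min - 1.0 / t_max

# Maximum delta-ray kinetic energy for a 1 MeV (kinetic) primary electron:
t_max = 0.5  # half the primary kinetic energy (indistinguishable electrons)

low  = relative_delta_rate(0.516, t_max)  # AE from the FAQ: 5 keV threshold
high = relative_delta_rate(0.611, t_max)  # hypothetical AE: 100 keV threshold
print(f"ratio of delta-ray production rates: {low / high:.1f}")
```

In this toy estimate, dropping the kinetic threshold from 100 keV to 5 keV multiplies the number of delta rays transported by roughly 25, which is why a small gap between AE and ECUT can dominate the CPU time.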

FAQ answer provided by Alex F. Bielajew



last updated 10/04/01