Patient Dose Calculation with Monte Carlo

Accurate patient dose calculation demands algorithms that faithfully reproduce radiation interaction physics in heterogeneous tissues. Monte Carlo (MC) stands as the gold standard for this task, with codes such as EGSnrc, MCNP, GEANT4, and PENELOPE leading both clinical practice and research. All of them rely on condensed history techniques developed by Berger (1963), where thousands of electron interactions are grouped into a single transport step — an elegant solution to handle the tiny energy loss per individual electromagnetic interaction.

Series overview: for the full roadmap and related articles, return to the complete guide on Monte Carlo in radiotherapy.

In class I condensed history (such as MCNP, inheriting the ETRAN algorithm), all collisions are grouped and secondary particles above a threshold energy are generated after the step. In class II — used by EGSnrc, GEANT4, and PENELOPE — hard collisions are tracked individually while soft ones are grouped. This distinction matters in practice: step-size artifacts and boundary-crossing issues are now well understood (Kawrakow and Bielajew, 1998; Kawrakow, 2000) and resolved in current code versions.

Statistical Uncertainties in MC Dose Calculation

Every MC simulation introduces unavoidable statistical noise in the calculated dose. Dose fluctuations around the mean value in each voxel directly affect isodose curves and dose-volume histograms (DVHs). The calculation efficiency combines estimated variance with the required CPU time:

$$\varepsilon = \frac{1}{\sigma^2 T}$$

Where:

  • ε = efficiency (figure of merit) of the MC simulation
  • σ² = estimated variance of the quantity of interest
  • T = CPU time (seconds)

Improving efficiency means either reducing the variance for a given time T, or decreasing T while keeping variance constant. Variance reduction techniques (VRTs) address the first path, using physics and mathematical tricks to accelerate convergence without changing the number of simulated histories. For more on VRTs, see our article on Monte Carlo fundamentals in radiotherapy.
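To make the trade-off concrete, here is a minimal numerical sketch of the figure of merit, assuming two toy simulations with identical CPU time and Gaussian-distributed dose samples (all numbers are illustrative, not from a real code):

```python
import numpy as np

def mc_efficiency(dose_samples, cpu_time_s):
    """Figure of merit eps = 1 / (sigma^2 * T), with sigma^2 the estimated
    variance of the mean dose and T the CPU time in seconds."""
    n = len(dose_samples)
    var_of_mean = np.var(dose_samples, ddof=1) / n
    return 1.0 / (var_of_mean * cpu_time_s)

rng = np.random.default_rng(0)
analog = rng.normal(2.0, 0.10, 10_000)  # plain simulation, spread 0.10
vrt    = rng.normal(2.0, 0.07, 10_000)  # same CPU time, reduced spread
gain = mc_efficiency(vrt, 60.0) / mc_efficiency(analog, 60.0)
print(gain)  # close to (0.10/0.07)^2, i.e. roughly 2
```

A VRT that roughly halves the variance at unchanged CPU time doubles the efficiency, which is exactly what the ratio above shows.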

Scoring Geometries and Uncertainty Methods

The geometry where dose is scored influences both accuracy and calculation speed. The PEREGRINE dose engine uses dosels — overlapping spheres independent of the material transport mesh — and terminates the simulation when the standard deviation in the highest-dose dosel reaches a user-specified threshold. The MMC (Macro Monte Carlo) engine uses kugels: macroscopic spheres of various materials with pre-calculated transport histories stored in lookup tables, dramatically accelerating electron transport.
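A PEREGRINE-style stopping rule can be sketched as follows; the batch structure, tolerance, and toy dose model below are illustrative assumptions, not the published implementation:

```python
import numpy as np

def run_until_converged(sample_batch, n_regions, rel_tol=0.02, max_batches=500):
    """Accumulate batch doses per scoring region and stop once the relative
    standard error in the highest-dose region drops to rel_tol."""
    sums = np.zeros(n_regions)
    sq_sums = np.zeros(n_regions)
    for n in range(1, max_batches + 1):
        d = sample_batch(n_regions)
        sums += d
        sq_sums += d * d
        if n >= 2:
            mean = sums / n
            var_of_mean = (sq_sums / n - mean**2) / (n - 1)
            hot = np.argmax(mean)               # highest-dose region
            rel = np.sqrt(max(var_of_mean[hot], 0.0)) / mean[hot]
            if rel <= rel_tol:
                return mean, n
    return sums / max_batches, max_batches

# Toy dose model: 20 regions, means ramping from 0.5 to 2.0, 10% batch noise.
rng = np.random.default_rng(1)
dose, n_batches = run_until_converged(
    lambda m: rng.normal(np.linspace(0.5, 2.0, m), 0.2), 20)
```

The same skeleton works whether the scoring regions are voxels, dosels, or segmented organs; only the accumulator arrays change size.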

Segmented organ scoring offers yet another approach: combining many voxels into a large volume allows rapid and accurate organ dose prediction, though at the cost of spatial resolution. The NCAT phantom — based on the Visible Human Dataset with B-spline surfaces — simulates cardiac and respiratory motion, making it ideal for controlled 4D studies.

Voxel size directly impacts calculation time: Cygler et al. (2004) showed that the number of histories per unit area is linearly proportional to scoring voxel mass.

Latent Variance

Total statistical uncertainty in dose calculation has two sources: the accelerator head simulation and patient/phantom dose fluctuations. Sempau et al. (2001) coined the term latent variance to describe the uncertainty tied to phase space statistical fluctuations. When phase space particles are reused assuming independence, the uncertainty approaches a finite latent variance value regardless of reuse count.
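A toy model makes the plateau visible: write the total variance as a fixed latent component plus a patient-side component that shrinks with the number of phase-space reuses. The numbers below are illustrative, not values from Sempau et al.:

```python
import numpy as np

sigma_latent = 0.5    # %, fixed by the finite phase-space statistics
sigma_patient = 2.0   # %, per single pass of the phase space through the patient

def total_uncertainty(reuse_count):
    """Quadrature sum: only the patient-side term shrinks with reuse."""
    return np.sqrt(sigma_latent**2 + sigma_patient**2 / reuse_count)

for r in (1, 4, 16, 64, 256):
    print(r, round(float(total_uncertainty(r)), 3))
# Values decrease toward sigma_latent = 0.5 % but never go below it.
```

However many times the file is recycled, the uncertainty floor is set by the statistics of the stored phase space itself; only a longer head simulation lowers it.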

Batch vs. History-by-History Method

In the batch method, uncertainty is estimated by grouping histories into N batches (typically 10):

$$s_{\bar{X}} = \sqrt{\frac{\sum_{i=1}^{N}(X_i - \bar{X})^2}{N(N-1)}}$$

Walters et al. (2002) identified three problems: uncertainty fluctuations with few batches, ignored correlations between incident particles, and an extra dimension in stored scoring data.

The history-by-history method, implemented following Salvat et al. (2009), resolves these limitations. Uncertainty is calculated directly from accumulators maintained during simulation, without storing data in separate batches.
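The two estimators can be compared directly on synthetic data. The sketch below implements the batch formula from above and a history-by-history estimator built from running sums of x and x², a simplified stand-in for the accumulator bookkeeping in production codes:

```python
import numpy as np

def history_by_history_uncertainty(doses):
    """Uncertainty of the mean from running accumulators (no batching):
    s = sqrt((sum_x2/N - (sum_x/N)^2) / (N - 1)), per voxel."""
    n = doses.shape[0]
    sum_x = doses.sum(axis=0)
    sum_x2 = (doses**2).sum(axis=0)
    mean = sum_x / n
    return np.sqrt(np.maximum(sum_x2 / n - mean**2, 0.0) / (n - 1))

def batch_uncertainty(doses, n_batches=10):
    """Batch method: group histories into N batches, then
    s = sqrt(sum (X_i - Xbar)^2 / (N (N - 1)))."""
    batch_means = doses.reshape(n_batches, -1, *doses.shape[1:]).mean(axis=1)
    xbar = batch_means.mean(axis=0)
    return np.sqrt(((batch_means - xbar)**2).sum(axis=0)
                   / (n_batches * (n_batches - 1)))

rng = np.random.default_rng(2)
per_history = rng.exponential(1.0, size=(10_000, 4))  # 10k histories, 4 voxels
hbh = history_by_history_uncertainty(per_history)
bat = batch_uncertainty(per_history)
# For uncorrelated histories the two agree (within batch-estimator noise);
# the history-by-history version needs no extra batch dimension in memory.
```

Note how the batch estimate, built from only 10 values, fluctuates much more than the history-by-history one, which is exactly the first problem Walters et al. identified.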

Denoising and Smoothing of MC Distributions

MC simulations with few histories generate statistical noise that impairs clinical evaluation. Rather than simply increasing the number of histories — which is not always practical — denoising algorithms can reduce noise without compromising accuracy. Kawrakow (2002) established five benchmark criteria for evaluating smoothing algorithms: visual isodose inspection, DVH differences, maximum dose difference, RMSD, and the x%/y mm test (current recommendation: 2-3%/2 mm).

| Method | Author(s) | Speedup factor | Key features |
|---|---|---|---|
| DVH denoising (deconvolution) | Sempau and Bielajew (2000) | n/a | Calculated DVH treated as true DVH convolved with noise |
| DVH denoising (restoration) | Jiang et al. (2000) | n/a | Deblurring function with least-squares minimization |
| 3D digital filters | Deasy (2000) | n/a | First to propose denoising of 3D dose distributions |
| Wavelet thresholding | Deasy et al. (2002) | 2x or more | Wavelet coefficients below threshold set to zero |
| 3D adaptive Savitzky-Golay | Kawrakow (2002) | 2-20x | Adaptive window based on local uncertainty |
| 3D anisotropic diffusion | Miao et al. (2003) | up to 20x | Preserves dose gradients, 2-5x noise reduction |
| IRON | Fippel and Nüsslin (2003) | 2-10x | Minimizes second-order partial derivatives |
| Content-adaptive median hybrid | El Naqa et al. (2005) | n/a | Median at edges, mean in homogeneous regions |

Source: Monte Carlo Techniques in Radiation Therapy (2nd ed., CRC Press, 2022)

Kawrakow’s 3D Savitzky-Golay filter stands out in practice: its smoothing window adapts to local statistical uncertainty, reducing the required number of histories by factors of 2 to 20. This makes it particularly valuable during the iterative phase of treatment planning.
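A fixed-window variant of such a filter can be sketched in a few lines: a least-squares polynomial fit in a moving window, evaluated at the window centre, reduces to a linear convolution kernel. This deliberately omits Kawrakow's key ingredient, the uncertainty-adaptive window; the grid size, noise level, and filter parameters below are illustrative:

```python
import numpy as np

def savgol_weights(window=7, polyorder=2):
    """Weights of a least-squares polynomial fit evaluated at the window
    centre: the Savitzky-Golay filter as a plain convolution kernel."""
    half = window // 2
    pos = np.arange(-half, half + 1)
    A = pos[:, None] ** np.arange(polyorder + 1)[None, :]
    return np.linalg.pinv(A)[0]  # row 0 = fitted value at the centre

def smooth_dose_3d(dose, window=7, polyorder=2):
    """Apply the 1D kernel along each axis in turn (separable smoothing)."""
    w = savgol_weights(window, polyorder)
    out = dose
    for axis in range(out.ndim):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, w, mode="same"), axis, out)
    return out

# Toy 32^3 dose: a Gaussian-like bump with ~5% multiplicative noise.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 32)
r2 = ((x[:, None, None] - 0.5)**2 + (x[None, :, None] - 0.5)**2
      + (x[None, None, :] - 0.5)**2)
true_dose = np.exp(-r2 / 0.1)
noisy = true_dose * (1.0 + 0.05 * rng.standard_normal(true_dose.shape))
smoothed = smooth_dose_3d(noisy)
```

Because the quadratic fit reproduces smooth dose gradients while averaging out uncorrelated noise, the RMS deviation from the true distribution drops after filtering.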

CT to Tissue Conversion in MC Dose Calculation

Converting CT Hounsfield units (HU) into material composition and density is one of the most critical steps in MC dose calculation. Early algorithms used six or fewer materials (air, lung, fat, water, muscle, bone). du Plessis et al. (1998) evaluated 16 human tissues and concluded that seven subsets were sufficient for 1% dose accuracy with megavoltage photon beams — resulting in 57 tissue types, including 21 cortical bone types (1,100-3,000 HU) and 31 lung tissue types (−950 to −20 HU).

The breakthrough came with stoichiometric conversion schemes, proposed by Schneider et al. (1996) originally for proton algorithms. The core idea: scan materials of known composition for calibration, fit measured HU values to a theoretical equation relating HU, density, Z, and atomic weight, then use the fitted parameters to calculate HU values for real tissues.
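The fitting step can be sketched as follows, using the relations HU = 1000(μ/μw − 1) and a Rutherford-style parameterisation μ/μw = ρe(k1 + k2·Z̃^3.62 + k3·Ẑ^1.86). The insert compositions and HU values below are invented for illustration and are not a clinical calibration:

```python
import numpy as np

# Hypothetical calibration inserts: (rho_e, Z^3.62 term, Z^1.86 term, HU)
inserts = np.array([
    (0.29,  550.0, 45.0, -700.0),   # lung-like
    (0.95,  480.0, 42.0,  -60.0),   # adipose-like
    (1.00,  560.0, 46.0,    0.0),   # water
    (1.05,  600.0, 48.0,   45.0),   # muscle-like
    (1.70, 4200.0, 95.0, 1200.0),   # cortical-bone-like
])
rho_e, z362, z186, hu = inserts.T

# Step 1: HU = 1000 (mu/mu_w - 1) with mu/mu_w = rho_e (k1 + k2 z362 + k3 z186)
# is linear in (k1, k2, k3), so the calibration is a least-squares fit.
A = rho_e[:, None] * np.column_stack([np.ones_like(z362), z362, z186])
k, *_ = np.linalg.lstsq(A, 1.0 + hu / 1000.0, rcond=None)

def predicted_hu(rho_e_t, z362_t, z186_t):
    """Step 2: compute the HU of a real tissue from its known composition."""
    mu_ratio = rho_e_t * (k[0] + k[1] * z362_t + k[2] * z186_t)
    return 1000.0 * (mu_ratio - 1.0)
```

Once the k parameters are fixed by the scanner calibration, HU values can be computed for any tabulated tissue composition, which is what makes the scheme scanner-specific but tissue-general.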

Vanderstraeten et al. (2007) generalized the scheme to 14 dosimetrically equivalent subsets (10 bone), reducing differences of up to 5% at the tissue-bone interface that existed with the 5-bin scheme. The calibration was validated across nine European radiotherapy departments with different CT scanners.

Dual-Energy CT: Improved Conversion

Dual-energy CT imaging scans the object at two different tube voltages to better estimate effective atomic number Z and relative electron density; Bazalova et al. (2008) reported mean extraction errors of 1.8% and 2.8% for these two quantities, respectively. For the 250 kVp beam, dose errors reached 17% due to tissue misassignment; for 18 MeV electrons and 18 MV photons, errors were 6% and 3%; for 6 MV, differences stayed below 1%.

Deformable Image Registration and 4D Dose Calculation

Deformable image registration connects respiratory phases, contour propagation, and 4D optimization. Keall et al. (2004) performed one of the first 4D MC calculations: dose was computed separately on eight 3D CT datasets (one per respiratory phase) and mapped back to the reference phase using deformable registration. For a comprehensive look at 4D dynamics, see our article on Dynamic Beam Delivery and 4D Monte Carlo.

Flampouri et al. (2005) compared 3D and 4D planning for six lung IMRT patients and concluded that conventional planning was sufficient for tumor motion under 12 mm; CT reconstruction artifacts had a larger impact than the motion itself. Using only three respiratory phases yields a 3% error relative to the ten-phase composite, and five phases reduce this to 0.5%.

Seco et al. (2008) introduced the omega index to analyze dose differences as a function of tissue density. Differences of 3-5 Gy between inhalation and free-breathing were observed in both low-density (lung) and high-density (bone) regions, showing that 4D MC improves predictions not only within the tumor but across the entire PTV.

Heath et al. (2008) compared three dose warping methods — center of mass tracking, trilinear interpolation, and defDOSXYZ — finding no clinically significant dose differences. When motion was not accounted for, the target volume dose was underestimated by up to 16%.
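Of these, trilinear interpolation is the easiest to sketch: each voxel of the reference phase pulls its dose from the displaced position given by the deformation vector field. The implementation below is a simplified dose lookup that ignores mass changes under deformation, exercised with a pure-translation field as a toy check:

```python
import numpy as np

def warp_dose_trilinear(dose, dvf):
    """Pull-back dose warping: each reference voxel reads the dose at its
    displaced position (dvf in voxel units, shape (3, nx, ny, nz)) using
    trilinear interpolation with edge clamping."""
    nx, ny, nz = dose.shape
    coords = np.indices(dose.shape).astype(float) + dvf
    c0 = np.floor(coords).astype(int)
    f = coords - c0                        # fractional offset per axis
    out = np.zeros_like(dose)
    for dx in (0, 1):                      # 8 corners of the trilinear cell
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[0], 1 - f[0])
                     * np.where(dy, f[1], 1 - f[1])
                     * np.where(dz, f[2], 1 - f[2]))
                xi = np.clip(c0[0] + dx, 0, nx - 1)
                yi = np.clip(c0[1] + dy, 0, ny - 1)
                zi = np.clip(c0[2] + dz, 0, nz - 1)
                out += w * dose[xi, yi, zi]
    return out

# Toy check: a rigid one-voxel shift along x is recovered exactly.
rng = np.random.default_rng(5)
dose_phase = rng.random((8, 8, 8))
dvf = np.zeros((3, 8, 8, 8))
dvf[0] = 1.0
warped = warp_dose_trilinear(dose_phase, dvf)
```

Energy-conserving schemes such as defDOSXYZ additionally track how voxel mass redistributes under the deformation, which is why they can differ from plain interpolation in steep-gradient, low-density regions.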

Inverse Planning with Monte Carlo

IMRT inverse planning calculates the D matrix that converts fluences to dose. Each matrix element can be computed using pencil beam, convolution/superposition, or MC. However, the standard formulation often fails to model delivery constraints and field-size-dependent output factors.
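The role of the D matrix can be illustrated with a toy least-squares optimization: given a random deposition matrix, find nonnegative beamlet weights w so that Dw approximates a uniform prescription. All dimensions and values below are illustrative, and real systems layer DVH and delivery constraints on top of this core:

```python
import numpy as np

rng = np.random.default_rng(4)
n_voxels, n_beamlets = 60, 12
D = 0.1 * rng.random((n_voxels, n_beamlets))   # dose per unit beamlet weight
d_rx = np.full(n_voxels, 2.0)                  # uniform 2 Gy prescription (toy)

# Projected gradient descent on ||D w - d_rx||^2 subject to w >= 0.
w = np.zeros(n_beamlets)
step = 1.0 / np.linalg.norm(D.T @ D, 2)        # 1/L step for guaranteed descent
for _ in range(2000):
    w = np.maximum(0.0, w - step * (D.T @ (D @ w - d_rx)))

dose = D @ w
```

Whether each column of D comes from a pencil beam model or an MC pencil beam changes only the accuracy of the matrix, not this optimization loop, which is why PB/MC hybrid schemes slot in so naturally.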

Jeraj and Keall (1999) made the first attempt at 3D MC inverse planning (MCNP + EGS4), generating MC pencil beams. Siebers and Kawrakow (2007) refined this with a hybrid method: initial pencil beam prediction iteratively improved by MC, achieving a 2.5x gain over pure MC optimization.

Nohadani et al. (2009) proposed a broad beam approach for 4D MC-PB: the phase space after the jaws is split into pixels, each generating an MC pencil beam distribution in the patient. When all PBs are combined into an open field, the output factor is that of the standard field. DVH comparisons showed that IMRT plans based on conventional pencil beam may appear superior in PTV coverage, but deviate significantly from the actual deposited dose calculated by MC. Dose calculation errors cannot be neglected.

Considered alongside the clinical use of Monte Carlo for photon beams and electron beam modeling techniques, MC patient dose calculation clearly brings together advances across the entire chain, from source modeling to dynamic delivery, to provide the most realistic dosimetric assessment available.
