
Why Monte Carlo Matters for Photon Beam Radiotherapy

Dose calculation accuracy of ±2% is a non-negotiable requirement in modern radiotherapy. ICRU Report 50 mandates that tumor dose stay within -5% to +7% of the prescribed dose, and detailed uncertainty analyses show this demands about 3.5% accuracy in dose delivery; for certain tumors, the margin tightens to 3%. Conventional convolution and superposition algorithms do not always meet this target in regions with tissue heterogeneities.

Series overview: for the full roadmap and related articles, return to the complete guide on Monte Carlo in radiotherapy.

Linear accelerator (LINAC) in a radiotherapy treatment room used for photon beam treatment and Monte Carlo dose planning
Photo: Jo McNamara / Pexels

The Monte Carlo (MC) method starts from first principles and tracks individual particle histories, including secondary particle transport. In practice, this makes MC the most accurate algorithm for simulating dose distributions in complex scenarios like IMRT, VMAT, and tissue heterogeneity situations. With MR-LINACs, MC has become not just preferable but mandatory — the magnetic field’s influence on dose distribution makes Monte Carlo the only viable calculation method (Hissoiny et al., 2011; Kubota et al., 2020).


Requirements for a Clinical Monte Carlo Planning System

A clinical Monte Carlo treatment planning (MCTP) system for photon beams goes well beyond a dose calculation algorithm coupled with a beam model. The system needs beam setup capabilities, dose display, and dosimetric evaluation tools — features that commercial TPS already provide for conventional algorithms but are often missing from research packages.

The beam model is the foundation. Inaccurate beam characterization propagates errors throughout the entire dose calculation pipeline. MC-based beam models can use full treatment head simulation, phase space (phsp) files, histogram-based models, or hybrid approaches. Each involves trade-offs between accuracy and computational speed.

Patient Model and CT Conversion

The anatomical patient representation directly impacts dosimetric accuracy. MC algorithms require interaction data (cross sections) derived from tissue composition, not just electron density as in conventional algorithms. Converting Hounsfield values to material composition involves segmentation into bins — more bins mean more accurate representation.
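As an illustration, the HU-to-material conversion can be sketched as a lookup over HU bins. This is a minimal sketch with illustrative bin boundaries, material names, and densities only; a clinical system derives these from a stoichiometric CT calibration of the specific scanner:

```python
import bisect

# Hypothetical HU-to-material bins (illustrative values, not a clinical
# calibration). Each entry: (upper HU bound, material, nominal density g/cm^3).
HU_BINS = [
    (-950, "air",         0.001),
    (-700, "lung",        0.26),
    (-100, "adipose",     0.95),
    (100,  "soft_tissue", 1.05),
    (300,  "cartilage",   1.10),
    (2000, "bone",        1.60),
]

def hu_to_material(hu: float):
    """Map a Hounsfield value to a (material, density) bin.

    More, finer bins give a more faithful tissue representation,
    at the cost of a larger cross-section lookup table."""
    bounds = [b[0] for b in HU_BINS]
    idx = min(bisect.bisect_left(bounds, hu), len(HU_BINS) - 1)
    _, material, density = HU_BINS[idx]
    return material, density
```

In a real MCTP system each bin maps to a full elemental composition from which interaction cross sections are derived, not just a density.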

CT image artifacts lead to inaccurate patient representations and, consequently, incorrect dose distributions. Grid resampling, where the calculation voxel size differs from that of the CT, introduces additional errors. A study by Volken et al. (2008) showed that integral-conserving Hermitian curve interpolation significantly improves accuracy compared with linear or cubic interpolation.
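The resampling pitfall can be demonstrated with a toy 1D profile: naive point-sampling interpolation clips sharp peaks and does not, in general, conserve the dose integral. A minimal NumPy sketch with illustrative numbers only:

```python
import numpy as np

# Sharp 1D dose peak sampled on a fine grid (0.1-unit spacing),
# with the peak deliberately placed between coarse-grid points.
fine_x = np.linspace(0, 10, 101)
dose = np.exp(-0.5 * ((fine_x - 5.15) / 0.3) ** 2)

# Resample onto a coarser grid (0.5-unit spacing) by linear point sampling.
coarse_x = np.linspace(0, 10, 21)
coarse_dose = np.interp(coarse_x, fine_x, dose)

def integral(y, x):
    """Trapezoidal integral, written out to stay NumPy-version agnostic."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

# The coarse grid misses the true peak, and the integral drifts:
print("peak (fine):   ", dose.max())
print("peak (coarse): ", coarse_dose.max())
print("integral drift:", integral(coarse_dose, coarse_x) - integral(dose, fine_x))
```

An integral-conserving scheme instead constrains the resampled curve so that the dose deposited in each region is preserved, which is what makes it preferable for dose grids.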

Dual-energy CT scanners hold potential for improving tissue identification relevant to MC, though the benefit is more pronounced for protons and kV radiotherapy than for MV photon beams.

Dose Calculation and Statistical Uncertainty

The MC algorithm can interface with the beam model through phsp files or direct in-memory particle passing. The latter is faster and eliminates the need to store large phsp files. A unique MC advantage is the ability to calculate dose for dynamic situations — patient motion, IMRT, VMAT — and even non-coplanar techniques with dynamic collimator and couch rotations.

MC calculation time doesn’t scale linearly with the number of beams when only statistical uncertainty in the target is considered. But it may increase if acceptable uncertainty is also required in OARs, which receive lower particle fluence.

Commissioning and Validation for Photon Monte Carlo

Commissioning and validation process of Monte Carlo treatment planning systems for photon beams in radiotherapy
Photo: Jo McNamara / Pexels

Commercial TPS vendors provide commissioning recommendations, but for MC implementations, the minimum sufficient set of comparisons isn’t fully established yet. Choosing tolerance and acceptance criteria — typically 2% or 2 mm — requires care: if these criteria apply to patient dose calculation, the beam model error estimate must not exceed 1% or 1 mm.
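If the component uncertainties are treated as independent and combined in quadrature (a common simplification), the arithmetic behind that 1% beam-model limit is easy to check:

```python
import math

def combined(components):
    """Combine independent error components in quadrature
    (assumes uncorrelated sources; a common simplification)."""
    return math.sqrt(sum(c * c for c in components))

# With an overall 2% patient-dose criterion, a 1% beam-model error
# leaves sqrt(2^2 - 1^2) of budget for all other error sources.
budget_left = math.sqrt(2.0 ** 2 - 1.0 ** 2)
print(f"remaining budget: {budget_left:.2f}%")  # 1.73%
```

A beam-model error much above 1% would consume most of the 2% budget before patient-specific effects are even considered.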

The validation process typically includes:

| Measurement type | Description | Notes |
|---|---|---|
| Depth dose curves | Relative and absolute, open and modified beams | In water or equivalent phantoms |
| Lateral profiles | Different field sizes, including off-axis | Static and dynamic fields (IMRT, VMAT) |
| Output factors | Absolute dose calibration (cGy/MU) | Multi-point method is more robust than single point |
| Inhomogeneous phantoms | Water-bone-water or equivalents | Validates transport in non-water materials |
| Clinical plans | MC vs conventional algorithm comparison | Simple and complex cases, water and CT-based |

Source: Monte Carlo Techniques in Radiation Therapy (2nd ed., CRC Press, 2022)

Comparisons at shallow depths are especially useful because they are highly sensitive to beam model parameters. In-air measurements also help evaluate beam model performance by reducing scatter impact. A critical point: since MC calculations carry statistical uncertainties, single-point dose comparisons are inappropriate for absolute calibration. Multi-point methods are far more robust.
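A least-squares fit over several measurement points is one way to realize the multi-point approach. This toy sketch (hypothetical dose values and a simple Gaussian noise model, purely for illustration) shows how averaging over points dilutes the per-voxel statistical noise that a single-point ratio is fully exposed to:

```python
import random

def calibration_factor_single(measured, calculated, idx=0):
    """Single-point calibration: a ratio at one point, fully exposed
    to the statistical noise of that one MC voxel."""
    return measured[idx] / calculated[idx]

def calibration_factor_multi(measured, calculated):
    """Multi-point (least-squares) factor k minimizing sum (m_i - k*c_i)^2;
    the per-point MC noise is averaged out."""
    num = sum(m * c for m, c in zip(measured, calculated))
    den = sum(c * c for c in calculated)
    return num / den

# Toy demonstration: a known true factor and 2% Gaussian noise on the
# "calculated" MC doses.
random.seed(1)
true_k = 1.02
calc_true = [100.0, 80.0, 60.0, 40.0, 20.0]
measured = [true_k * c for c in calc_true]
calc_mc = [c * random.gauss(1.0, 0.02) for c in calc_true]  # noisy MC doses

print(calibration_factor_single(measured, calc_mc))
print(calibration_factor_multi(measured, calc_mc))
```

With noiseless inputs both estimators return the true factor; with MC noise, the multi-point estimate has a smaller spread across repeated runs.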

MLC validation deserves special attention. A study by Reynaert et al. (2005) found 10% DVH differences for the optical chiasm between Peregrine and an in-house MC system, attributed to inaccurate Elekta MLC modeling. Transmission, interleaf leakage, and shaped fields must be rigorously validated.

For more on photon beam modeling with Monte Carlo, read our dedicated article on Monte Carlo external photon beam modelling.

MCTP Systems: Research and Commercial

Multiple research institutions have developed MCTP systems over the years. The following table summarizes the key systems:

| System | Institution | MC code | Beam model |
|---|---|---|---|
| RTMCNP | UCLA | MCNP4A | User-friendly MCNP interface |
| EGS4-MCTP | Memorial Sloan Kettering | EGS4 | Dual-source (primary + scatter) |
| MCDOSE | Stanford/Fox Chase | EGS4 | phsp or multiple-source models |
| RT_DPM | Univ. Michigan | DPM (Dose Planning Method) | BEAMnrc phsp |
| XVMC-based | Univ. Tübingen | XVMC | Virtual fluence model + MC optimization |
| MMCTP | McGill University | BEAMnrc + XVMC | DICOM-RT, contouring, visualization |
| SMCP | Inselspital/Univ. Bern | EGSnrc or VMC++ | Registered in Eclipse (Varian) |
| PRIMO | UPC/Essen | PENELOPE/DPM | GUI + DICOM-RT import |
| CARMEN | Univ. Sevilla | EGSnrc | MATLAB, inverse optimization |

Source: Monte Carlo Techniques in Radiation Therapy (2nd ed., CRC Press, 2022)

On the commercial side, the main systems include:

| System | Vendor | MC code | Key features |
|---|---|---|---|
| Peregrine | NOMOS/Corvus | Custom | 4-source model, correlated histograms (discontinued) |
| Monaco | Elekta (CMS) | XVMC | Virtual fluence model, 11 parameters, MLC transmission filters |
| iPlan MC | Brainlab | Custom | 93 in-air + 97 water measurements, speed vs accuracy MLC modes |
| ISOgray | DOSIsoft | PENELOPE/PENFAST | Selective particle tracking, skin/non-skin areas |
| Precision MC | Accuray | Custom | CyberKnife, single-source target, internal reference commissioning |
| RayStation MC | RaySearch | GPU-based in-house | 11 s dual-arc prostate (3 mm³ voxels, GTX 1080 Ti), 1% uncertainty, Woodcock tracking |

Source: Monte Carlo Techniques in Radiation Therapy (2nd ed., CRC Press, 2022)

RayStation MC deserves a highlight: released in 2019 (version 8b), it uses GPU-based transport with Woodcock tracking, achieving a calculation time of 11 seconds for a dual-arc prostate case with 3 mm³ voxels at 1% statistical uncertainty. The clear trend is that GPU implementations will make MC clinically practical for routine use.

Clinical Applications and Practical Examples

Radiotherapy treatment room with equipment for clinical Monte Carlo photon beam applications
Photo: Jo McNamara / Pexels

Noise in Dose Distributions

Unlike deterministic algorithms, MC produces dose distributions with statistical uncertainty. This affects isodose lines, DVHs, dose indices, and cost function convergence in optimization. The uncertainty is inversely proportional to the square root of the number of histories — halving it requires quadrupling the simulated particles.
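The scaling is simple enough to encode directly; a small sketch of the relation σ ∝ 1/√N:

```python
import math

def relative_uncertainty(n_histories, k=1.0):
    """Statistical uncertainty falls as k / sqrt(N) with N histories
    (k is a problem-dependent constant)."""
    return k / math.sqrt(n_histories)

def histories_needed(target_sigma, k=1.0):
    """Invert the relation: N = (k / sigma)^2."""
    return (k / target_sigma) ** 2

# Halving the uncertainty (2% -> 1%) quadruples the required histories:
ratio = histories_needed(0.01) / histories_needed(0.02)
print(ratio)  # 4.0
```

This quadratic cost is why acceptable uncertainty levels are chosen per structure rather than driven as low as possible everywhere.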

In practice, a 2% uncertainty per beam yields reasonable precision in the target volume when three or more beams are used. But point values like $D_{max}$ and $D_{min}$ are highly sensitive to statistical noise; volumetric quantities like $D_{median}$ or $D_{mean}$ are more reliable metrics with MC. For OARs receiving lower fluence, the uncertainty can be much higher than in the PTV.
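Assuming the beams contribute roughly equal dose to the target and are statistically independent (a simplification), the combined target uncertainty follows directly from the per-beam value:

```python
import math

def target_uncertainty(per_beam_sigma, n_beams):
    """Relative uncertainty of the summed dose from n equally weighted,
    independent beams: sigma_total = per_beam_sigma / sqrt(n_beams).
    (Equal weighting and independence are simplifying assumptions.)"""
    return per_beam_sigma / math.sqrt(n_beams)

# Three beams at 2% each give roughly 1.2% in the target:
print(round(target_uncertainty(0.02, 3), 4))  # 0.0115
```

For an OAR reached by only one or two of the beams, and at a much lower dose level, the same per-beam setting yields a far larger relative uncertainty.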

Denoising techniques — including recent deep learning approaches — offer potential for noise reduction but must preserve real dose gradients and are better suited for initial planning than final dose calculation.

Lungs: Where MC Makes the Biggest Difference

Lung cases are where differences between MC and conventional algorithms become most evident. Pencil beam-based algorithms can produce errors of up to 30% compared with MC in simple lung heterogeneities (Fogliata et al., 2007). For more advanced algorithms, this difference drops to about 8%. Wang et al. (2002a) showed differences exceeding 10% between MC and equivalent path length correction algorithms.

A relevant finding: using 6 MV photons instead of 15 MV is advantageous in lung due to reduced lateral electron range at lower energies (Wang et al., 2002b; Madani et al., 2007). When highly accurate algorithms are available, energy selection depends on clinical endpoint prioritization.

See also our article on Dynamic Beam Delivery and 4D Monte Carlo to understand how MC handles IMRT and VMAT in dynamic scenarios.

Monte Carlo as a QA Tool

Beyond planning, MC serves as an independent quality assurance tool. The ability to recalculate dose distributions from first principles makes MC a robust verification for plans calculated with other algorithms — particularly useful for monitor unit (MU) verification in complex techniques like IMRT.

To delve deeper into the theoretical foundations, read the article on Monte Carlo fundamentals in radiotherapy.

Conclusion: Monte Carlo in Routine Photon Beam Practice

Monte Carlo is no longer restricted to research laboratories. With GPU implementations delivering calculations in seconds, structured commissioning, and validation against experimental measurements, MC is ready for clinical routine — and is indispensable for MR-LINACs. The main remaining barrier is the source model: each user must commission their accelerator so the MC algorithm meets accuracy requirements (typically 2% or 2 mm) before clinical patient use.

Volumetric quantities like $D_{mean}$ are preferable to point values for dose prescription and evaluation with MC. And for lung cases, the clinical impact of MC is unequivocal — ignoring real particle transport can significantly compromise treatment quality.
