AAPM RT-MAC Challenge

Organized by MarkGooding


Overview

MRI is popular in radiation oncology because of its excellent soft-tissue and tumor contrast. With the advent of the MR-Linac and MR-guided radiation therapy, there is a trend toward MR-based radiation treatment planning. Contouring is an important task in modern radiation treatment planning and frequently introduces uncertainty into radiation therapy due to inter-observer variability. Auto-segmentation has been demonstrated to be an effective approach to reducing this uncertainty. The overall objective of this grand challenge is to provide a platform for comparing auto-segmentation algorithms used to delineate organs at risk (OARs) or tumors from MR images of head and neck patients for radiation treatment planning. The results will indicate the performance achieved by various auto-segmentation algorithms and can be used to guide the selection of these algorithms for clinical use if desired. The challenge is made up of multiple phases:

Phase 1 will be conducted via this website in advance of the AAPM meeting. 12 test images will be provided, and results will be submitted online. An individual from each of the two top-performing teams will receive a waiver of the meeting registration fee in order to present their methods during the challenge symposium at AAPM.

Phase 2 will be conducted live at the AAPM meeting. A further 10 test images will be provided for evaluation, and participants will have 2 hours to generate results. Participants need not have participated in Phase 1 to take part in Phase 2.

Symposium: Following Phase 2, a symposium will be held at which the results of both previous phases will be presented.

Details of the AAPM symposium

Phase 3 will be an on-going benchmarking effort conducted via this website. Both test sets (from Phases 1 and 2) will be included within the on-going assessment.

The Prize

An individual from each of the two top-performing teams in phase 1 will receive a waiver of the meeting registration fee in order to present their methods during the challenge symposium at AAPM.

Get Started

  1. Register here to get access
  2. Download the data after approval
  3. Submit your results
  4. Win the Challenge

Important Dates

  • May 03, 2019 Release of training data
  • May 03, 2019 Release of off-site test data
  • May 29, 2019 AAPM Early-bird registration ends (refunded if you win!)
  • June 01, 2019 Off-site test result submission opens
  • June 21, 2019 Off-site test result submission close
  • June 28, 2019 Off-site test results released
  • July 14-18, 2019 AAPM Annual Meeting
    • TBC Live challenge at AAPM. New test data released
    • TBC Segmentation symposium at AAPM. Results announced
  • TBC Online leader board available

Contouring Guidelines

Contouring guidelines will be documented here in the near future.

Evaluation Criteria

Auto-segmented contours will be compared against the manual contours for all test datasets using the following evaluation metrics, as implemented in Plastimatch. RTSS files will be voxelised to CT resolution for all calculations. Evaluation will be performed in 3D. To avoid ambiguity about the extent to which the spinal cord and esophagus should be contoured, submitted contours will be cropped to the extent of the test data. You will therefore not be penalised for contouring the structure over too great an extent in the inferior-superior direction, but you will be penalised for under-segmentation.

Dice Coefficient

This is a measure of relative overlap, where 1 represents perfect agreement and 0 represents no overlap.

\[ DSC(X,Y) = \frac{2 \left| X \cap Y \right|}{\left| X \right| + \left| Y \right|} \]

where X and Y are the ground truth and test regions.
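As an illustration, the Dice coefficient can be computed from binary masks with NumPy. This is a minimal sketch, not the Plastimatch implementation used for scoring:

```python
import numpy as np

def dice(x, y):
    """Dice coefficient between two binary masks (1 = perfect, 0 = no overlap)."""
    x = np.asarray(x, dtype=bool)
    y = np.asarray(y, dtype=bool)
    denom = x.sum() + y.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(x, y).sum() / denom
```

For example, masks `[1, 1, 0]` and `[1, 0, 0]` share one voxel, giving a Dice of 2/3.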

Mean surface distance

The directed average Hausdorff measure is the average distance of a point in X to its closest point in Y. That is:

\[ \vec{d}_{H,\mathrm{avg}}(X,Y) = \frac{1}{|X|} \sum_{x \in X} \min_{y \in Y} d (x,y) \]

The (undirected) average Hausdorff measure is the average of the two directed average Hausdorff measures:

\[ d_{H,\mathrm{avg}}(X,Y) = \frac{\vec{d}_{H,\mathrm{avg}}(X,Y) + \vec{d}_{H,\mathrm{avg}}(Y,X)}{2} \]
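A minimal NumPy sketch of these two measures, treating X and Y as arrays of surface-point coordinates and using brute-force pairwise distances (again, not the Plastimatch implementation):

```python
import numpy as np

def directed_avg_hausdorff(X, Y):
    """Average distance from each point in X to its closest point in Y.

    X and Y are (n, d) arrays of surface-point coordinates.
    """
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    # Brute-force |X| x |Y| pairwise Euclidean distance matrix.
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return d.min(axis=1).mean()

def mean_surface_distance(X, Y):
    """Undirected average Hausdorff: mean of the two directed measures."""
    return 0.5 * (directed_avg_hausdorff(X, Y) + directed_avg_hausdorff(Y, X))
```

For example, the single points (0, 0) and (3, 4) are 5 apart, so both directed measures, and hence the mean surface distance, equal 5.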

Hausdorff distance (95% Hausdorff distance)

The directed percent Hausdorff measure, for a percentile r, is the r-th percentile distance over all distances from points in X to their closest point in Y. For example, the directed 95% Hausdorff distance is the distance from a point in X to its closest point in Y that is greater than or equal to the corresponding distance for exactly 95% of the other points in X. In mathematical terms, denoting the r-th percentile as Kr, this is given as:

\[ \vec{d}_{H,r}(X,Y) = K_r \left\{ \min_{y \in Y} d(x,y) : x \in X \right\} \]

The (undirected) percent Hausdorff measure is again defined as the mean of the two directed measures:

\[ d_{H,r}(X,Y) = \frac{\vec{d}_{H,r}(X,Y) + \vec{d}_{H,r}(Y,X)}{2} \]
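A sketch of the percent Hausdorff measure in the same style, using NumPy's percentile function (a simplified illustration, not the official Plastimatch implementation):

```python
import numpy as np

def directed_percent_hausdorff(X, Y, r=95):
    """r-th percentile of distances from points in X to their closest point in Y."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return np.percentile(d.min(axis=1), r)

def percent_hausdorff(X, Y, r=95):
    """Undirected percent Hausdorff: mean of the two directed measures."""
    return 0.5 * (directed_percent_hausdorff(X, Y, r)
                  + directed_percent_hausdorff(Y, X, r))
```

Unlike the maximum Hausdorff distance, the 95th-percentile variant is robust to a few outlier points on either surface.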

Normalisation of the score

Different organs and measures have different ranges of scores, so it is not possible to simply average them into an overall score. To normalise the scores with respect to expected values, 3 cases have been contoured by multiple observers. The mean score of these observers will be used as a reference against which submitted contours will be compared. For any organ/measure, a perfect value (Dice = 1, AD/HD = 0) will be scored 100. A value equivalent to the average inter-observer reference will be given a score of 50. A linear scale will be used to interpolate between these values, and to extrapolate beyond them, such that a score of 0 will be given to any result that falls further below the reference than the perfect score is above it.

Score = max ( 50 + ( (T-R)/(P-R) * 50 ), 0 )

Where T is the test contour measure, P is the perfect measure, and R is the reference measure for that organ/measure.

For example, given a reference Dice of 0.85, a test contour with a Dice of 0.9 against the "ground truth" will score 66.7, whereas a test contour with a Dice of 0.72 against the "ground truth" would score 6.7.
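The scoring formula above translates directly into Python. This is a sketch of the published formula; the perfect value P is 1 for Dice and 0 for the distance measures:

```python
def normalised_score(T, R, P):
    """Normalised score: 100 at the perfect measure P, 50 at the
    inter-observer reference R, linearly scaled and clipped below at 0.
    T is the test contour's measure."""
    return max(50.0 + (T - R) / (P - R) * 50.0, 0.0)
```

With a reference Dice of 0.85, `normalised_score(0.9, 0.85, 1.0)` gives about 66.7, matching the worked example above.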

The winners

The normalised scores for all organs, measures, and test cases will be averaged (mean) to give a final score. The winner will be the team with the highest final score.

Submission Guidelines

Submitted contours should include all of the structures found in the training data, named in the same way as the training data, i.e. each case should contain:

  • Parotid_L
  • Parotid_R
  • Submand_L
  • Submand_R
  • LN_II_L
  • LN_II_R
  • LN_III_L
  • LN_III_R

The results should be submitted as a single DICOM RTSTRUCT file per test case. Each file should be named according to the patient ID for the case - i.e. LCTSC-Test-S1-101.dcm

Structure files for all 10 test cases should be encapsulated into a single zip file. There must be no folder structure within the zip file. No specific naming is required for the zip file. The zip file can then be uploaded via the participate page.

If your submission has file naming errors, missing files, or structure naming errors, the website should report these to you, but such errors may result in that submission being scored zero.
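The flat zip layout can be produced with a few lines of standard-library Python. This is a sketch; the filename below is a hypothetical example, not an actual test-case ID:

```python
import pathlib
import zipfile

def pack_submission(rtstruct_files, out_zip):
    """Zip one RTSTRUCT file per test case, flat (no folders inside the zip)."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in rtstruct_files:
            p = pathlib.Path(f)
            # arcname=p.name strips any directory components from the entry
            zf.write(p, arcname=p.name)
```

Because `arcname` is set to the bare filename, the resulting archive contains no folder structure, as required.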

 

Conversion to DICOM RTSTRUCT using open source software

To convert your local format to DICOM-RT, you may use 3D Slicer or CERR.

Below is the instruction of using 3D Slicer to perform the conversion:

1) Load all images. At time of load, click "Labelmap" checkbox for each structure
2) Go to Segmentation module
3) For "Active segmentation", choose "Create new segmentation"
4) For each structure, repeat:
    4a) In "Export/Import segments", choose structure, and import as labelmap
5) Click on one of the segments, and then click "Edit selected"
6) Set the "Master volume" to your CT

7) Go to Data module.
8) On background, right click choose "Create new subject"
9) On subject, right click choose "Create child study"
10) Drag the CT and segmentation node onto the child study
11) Right click on study, choose "Export to DICOM"

12) Enjoy your newly created DICOM-RT file

Live Competition Guidelines

Please submit your off-site test results to enter the live competition. The live competition of this grand challenge will be held in conjunction with the 2019 AAPM annual meeting in San Antonio, Texas, USA. Meeting information is available here. Please register for the meeting to take part in the live competition.

Details of the live challenge will appear here once they are finalised.

Terms and Conditions

  • Anonymous participation is not allowed
  • By entering, you give the organizers permission to publish the results of this study
  • Results will not be linked to participants in publications without express permission of the participant to do so
  • Entry by commercial entities is permitted, but should be disclosed
  • Entries from groups associated with the organisers are permitted, but must be conducted independently and disclosed as such.
  • Team participation is allowed, but team members should share the same affiliation and each team should have no more than 3 persons.

Organizers and Major Contributors

  • Greg Sharp (Massachusetts General Hospital)
  • Jinzhong Yang (MD Anderson Cancer Center)
  • Mark Gooding (Mirada Medical)
  • Carlos Cardenas (MD Anderson Cancer Center)
  • Abdallah Mohamed (MD Anderson Cancer Center)
  • Harini Veeraraghavan (Memorial Sloan Kettering Cancer Center)
  • Jayashree Kalpathy-Cramer (Harvard University)
  • Artem Mamonov (Harvard University)
  • Andrew Beers (Harvard University)

Sponsors

Training

Start: April 15, 2019, midnight

Description: In this phase you can download the training data and offline test data. Do not try to submit any results yet. The offline test phase begins 1 June.

Pre-AAPM

Start: June 1, 2019, midnight

AAPM Live Challenge

Start: July 15, 2019, midnight

Competition Ends

Never
