Matthew A Kupinski
- Professor, Optical Sciences
- Professor, Radiology
- Professor, BIO5 Institute
- Professor, Applied Mathematics - GIDP
- Member of the Graduate Faculty
Biography
Matthew A. Kupinski is a Professor at The University of Arizona with appointments in the College of Optical Sciences, the Department of Medical Imaging, and the Program in Applied Mathematics. He performs theoretical research in the field of image science; his recent work emphasizes quantifying the quality of multimodality and adaptive medical imaging systems using objective, task-based measures of image quality. He holds a B.S. in physics from Trinity University in San Antonio, Texas, and received his Ph.D. in 2000 from the University of Chicago. He is the recipient of the 2007 Mark Tetalman Award given by the Society of Nuclear Medicine and is a member of OSA and SPIE. Contact him at the College of Optical Sciences, The University of Arizona, 1630 E. University Blvd., Tucson, Arizona 85721; mkupinski@optics.arizona.edu.
Degrees
- Ph.D. Medical Physics
- University of Chicago, Chicago, Illinois, USA
- Computerized pattern classification in medical imaging
- B.S. Physics
- Trinity University, San Antonio, Texas, USA
Work Experience
- College of Optical Sciences, University of Arizona (2014 - Ongoing)
- College of Optical Sciences, University of Arizona (2008 - 2014)
- College of Optical Sciences, University of Arizona (2002 - 2008)
- University of Arizona, Tucson, Arizona (2000 - 2002)
- University of Chicago, Chicago, Illinois (1997 - 1998)
- University of Chicago, Chicago, Illinois (1995 - 2000)
- Honeywell (1990 - 1993)
Licensure & Certification
- Q Clearance, Department of Energy (2013)
Interests
Teaching
Probability and statistics, mathematical modeling, mathematical methods, statistical optics, statistical decision theory
Research
Medical imaging, task-based assessment of image quality, statistical decision theory, homeland security, CT imaging, SPECT, PET, MRI, information theory
Courses
2025-26 Courses
- Dissertation, OPTI 920 (Spring 2026)
- Probability+Stat Optics, OPTI 508 (Spring 2026)
- Dissertation, OPTI 920 (Fall 2025)
- Mathematical Optics Lab, OPTI 512L (Fall 2025)
- Noise In Imaging Systems, OPTI 636 (Fall 2025)
2024-25 Courses
- Dissertation, OPTI 920 (Spring 2025)
- Probability+Stat Optics, OPTI 508 (Spring 2025)
- Dissertation, OPTI 920 (Fall 2024)
- Mathematical Optics Lab, OPTI 512L (Fall 2024)
- Noise In Imaging Systems, OPTI 636 (Fall 2024)
2023-24 Courses
- Master's Report, OPTI 909 (Summer I 2024)
- Dissertation, OPTI 920 (Spring 2024)
- Probability+Stat Optics, OPTI 508 (Spring 2024)
- Dissertation, OPTI 920 (Fall 2023)
- Mathematical Optics Lab, OPTI 512L (Fall 2023)
- Noise In Imaging Systems, OPTI 636 (Fall 2023)
2022-23 Courses
- Dissertation, OPTI 920 (Spring 2023)
- Probability+Stat Optics, OPTI 508 (Spring 2023)
- Thesis, OPTI 910 (Spring 2023)
- Dissertation, MATH 920 (Fall 2022)
- Dissertation, OPTI 920 (Fall 2022)
- Mathematical Optics Lab, OPTI 512L (Fall 2022)
- Noise In Imaging Systems, OPTI 636 (Fall 2022)
- Research, MATH 900 (Fall 2022)
- Thesis, OPTI 910 (Fall 2022)
2021-22 Courses
- Dissertation, OPTI 920 (Spring 2022)
- Probability+Stat Optics, OPTI 508 (Spring 2022)
- Thesis, OPTI 910 (Spring 2022)
- Dissertation, OPTI 920 (Fall 2021)
- Mathematical Optics Lab, OPTI 512L (Fall 2021)
- Noise In Imaging Systems, OPTI 636 (Fall 2021)
- Thesis, OPTI 910 (Fall 2021)
2020-21 Courses
- Dissertation, MATH 920 (Spring 2021)
- Dissertation, OPTI 920 (Spring 2021)
- Probability+Stat Optics, OPTI 508 (Spring 2021)
- Thesis, OPTI 910 (Spring 2021)
- Directed Graduate Research, OPTI 792 (Fall 2020)
- Dissertation, MATH 920 (Fall 2020)
- Dissertation, OPTI 920 (Fall 2020)
- Mathematical Optics Lab, OPTI 512L (Fall 2020)
- Noise In Imaging Systems, OPTI 636 (Fall 2020)
2019-20 Courses
- Dissertation, MATH 920 (Spring 2020)
- Dissertation, OPTI 920 (Spring 2020)
- Probability+Stat Optics, OPTI 508 (Spring 2020)
- Thesis, OPTI 910 (Spring 2020)
- Dissertation, OPTI 920 (Fall 2019)
- Mathematical Optics Lab, OPTI 512L (Fall 2019)
- Noise In Imaging Systems, OPTI 636 (Fall 2019)
- Thesis, OPTI 910 (Fall 2019)
2018-19 Courses
- Dissertation, OPTI 920 (Spring 2019)
- Probability+Stat Optics, OPTI 508 (Spring 2019)
- Thesis, OPTI 910 (Spring 2019)
- Mathematical Optics Lab, OPTI 512L (Fall 2018)
- Noise In Imaging Systems, OPTI 636 (Fall 2018)
2017-18 Courses
- Directed Graduate Research, OPTI 792 (Spring 2018)
- Dissertation, OPTI 920 (Spring 2018)
- Probability+Stat Optics, OPTI 508 (Spring 2018)
- Dissertation, OPTI 920 (Fall 2017)
2016-17 Courses
- Dissertation, OPTI 920 (Spring 2017)
- Master's Report, OPTI 909 (Spring 2017)
- Probability+Stat Optics, OPTI 508 (Spring 2017)
- Dissertation, OPTI 920 (Fall 2016)
- Independent Study, OPTI 599 (Fall 2016)
- Master's Report, OPTI 909 (Fall 2016)
- Mathematical Optics Lab, OPTI 512L (Fall 2016)
- Noise In Imaging Systems, OPTI 636 (Fall 2016)
2015-16 Courses
- Dissertation, OPTI 920 (Spring 2016)
- Probability+Stat Optics, OPTI 508 (Spring 2016)
Scholarly Contributions
Books
- Mello-Thoms, C. R., & Kupinski, M. A. (2014). Image Perception, Observer Performance, and Technology Assessment. SPIE.
- Kupinski, M. A., & Barrett, H. H. (2005). Small-Animal SPECT Imaging. Springer.
Chapters
- Kupinski, M. A. (2021). Evaluation and Image Quality in Radiation-Based Medical Imaging. In Handbook of Particle Detection and Imaging. Springer International Publishing. doi:10.1007/978-3-319-93785-4_43. In this chapter, we present methods for assessing image quality in radiation-based medical imaging. Image quality is defined by the ability of an observer to perform a relevant task using the images or data generated by an imaging system. The techniques presented in this chapter may be used to assess the utility of imaging hardware or to fine-tune the processing of image data into reconstructed images. Special attention is paid to nuclear medicine imaging, but X-ray-based imaging systems are also considered.
- Kupinski, M. A. (2018). Implementation of Observer Models. In The Handbook of Medical Image Perception and Techniques. Cambridge University Press. doi:10.1017/9781108163781.019
- Kupinski, M. A. (2012). Evaluation and Image Quality in Radiation-Based Medical Imaging. In Handbook of Particle Detection and Imaging (pp. 1083-1093). Springer.
- Kupinski, M. A., & Clarkson, E. (2005). Objective Assessment of Image Quality. In Small-Animal SPECT Imaging (pp. 101-114). Springer.
- Park, S., Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2003). Ideal-observer performance under signal and background uncertainty. In Information Processing in Medical Imaging (pp. 342-353). Springer.
- Hoppin, J., Kupinski, M., Kastis, G., Clarkson, E., & Barrett, H. H. (2001). Objective comparison of quantitative imaging modalities without the use of a gold standard. In Information Processing in Medical Imaging (pp. 12-23). Springer.
- Giger, M. L., Vyborny, C. J., Huo, Z., & Kupinski, M. A. (2000). Computer-Aided Diagnosis in Mammography. In Digital Mammography. International Society for Optics and Photonics. doi:10.1117/3.831079.ch15. In this chapter we discuss the rationale and methods for computer-aided diagnosis in mammography. Computer-aided diagnosis (CAD) is a diagnosis made by a clinician who uses the output from a computerized analysis of medical images as a "second opinion" in detecting lesions and in making diagnostic decisions. The final diagnosis is rendered by the clinician, e.g., the radiologist. Computer vision and artificial intelligence techniques are developed and customized to accommodate lesions such as cancer and their radiographic presentations on the normal parenchymal background of the breast. Mammography, x-ray imaging of the breast, is currently the best method for the early detection of breast cancer. However, between 10% and 30% of women who have breast cancer and undergo mammography have negative mammograms. In approximately two-thirds of these false-negative mammograms, the radiologist failed to detect a cancer that was evident retrospectively. The missed detections may be due to the subtle nature of the radiographic findings (i.e., low conspicuity of the lesion), poor image quality, eye fatigue, or oversight by the radiologist. It has been suggested that double reading (by two radiologists) may increase sensitivity. Thus, one aim of CAD is to increase the efficiency and effectiveness of screening procedures by using a computer system as a "second reader" (like a "spell checker") to indicate locations of suspicious abnormalities in mammograms as an aid to the radiologist, leaving the final decision regarding the likelihood of the presence of a cancer and patient management to the radiologist. The interpretation of screening mammograms lends itself to CAD since it is a repetitive task involving mostly normal images. Figure 15.1 shows a schematic diagram of a computerized detection method for use in screening mammography. If a suspicious region is detected, the radiologist then decides whether the abnormality is likely to be malignant or benign, and what course of action should be recommended (i.e., return to screening, return for short-term follow-up, or referral for biopsy). Many patients are referred for surgical biopsy on the basis of a radiographically detected mass lesion or cluster of microcalcifications. Although general rules for the differentiation between benign and malignant breast lesions exist, considerable variability occurs in the interpretation of findings by radiologists with current radiographic techniques. On average, only 10-20% of masses referred for surgical breast biopsy are actually malignant. Thus, another aim of CAD is to extract and analyze the characteristics of benign and malignant lesions seen at mammography in an objective manner in order to aid the radiologist. This has the potential to increase diagnostic accuracy and reduce the number of false-positive diagnoses of malignancies, thereby decreasing patient morbidity by reducing the number of surgical biopsies performed and their associated complications. Figure 15.2 shows a schematic diagram of a computerized diagnosis method for use in the mammographic workup of a suspect lesion.
Journals/Publications
- Feng, Y., Kupinski, M. A., Ottensmeyer, M. P., Worstell, W., Tawakol, A., Furenlid, L. R., & Sabet, H. (2025). Analytical methods for system matrix calculation and spatial resolution evaluation of DC-SPECT system. Physics in Medicine and Biology, 70(14). doi:10.1088/1361-6560/adedf7. Objective. To develop and evaluate an analytical method for calculating the system matrix of a dynamic cardiac single photon emission computed tomography (DC-SPECT) system, eliminating the need for computationally intensive Monte Carlo (MC) simulations. Approach. An analytical model was proposed for system matrix generation, incorporating solid angle calculations adapted for square-shaped pinhole collimators. The resulting sensitivity maps were validated against MC simulation results. Image reconstructions using the proposed analytical model were compared to those using an MC-simulated system matrix. Additionally, the overall system performance, including sensitivity and spatial resolution, was assessed using MC simulations. Main results. The analytical sensitivity map showed good agreement with MC-based results. Reconstructed images using the analytical model preserved key features but showed reduced performance compared to MC-based reconstructions. MC simulations of DC-SPECT demonstrated a spatial resolution of 5.5 mm for hot regions and 6.0 mm for cold regions, with an overall system sensitivity of 0.07% over a 15 cm diameter spherical field of view. Significance. The proposed analytical method offers a fast and practical alternative to MC-based system matrix generation, enabling efficient system design and prototyping. While its use in clinical image reconstruction requires further evaluation, the model provides a promising tool for accelerating the development of next-generation DC-SPECT systems.
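The solid-angle calculation that this paper adapts for square pinholes can be illustrated with the standard closed-form expression for a rectangular aperture viewed from a point on its central axis. The sketch below is illustrative only (it is not the paper's full off-axis model, and the function name is my own):

```python
import numpy as np

def rect_solid_angle(a, b, h):
    """Solid angle (steradians) subtended by an a-by-b rectangular
    aperture, seen from a point on its central axis at distance h.
    Standard closed-form result for an on-axis point source."""
    return 4.0 * np.arctan(a * b / (2.0 * h * np.sqrt(4.0 * h**2 + a**2 + b**2)))
```

For apertures small relative to h this reduces to the familiar a*b/h**2 approximation, which is what makes fast analytical sensitivity maps possible; off-axis source points require decomposing the aperture into sub-rectangles, which is where a fuller model comes in.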
- Kalluri, K. S., Zeraatkar, N., Auer, B., Pells, S., Pretorius, P. H., Richards, G. R., May, M., Momsen, N., Doty, K., Gonzales, M. R., Fromme, T., Truong, K., Kupinski, M. A., Kuo, P. H., Furenlid, L. R., & King, M. A. (2025). Examination of aperture layout designs for an adaptive-stationary multi-pinhole brain-dedicated SPECT system. Medical Physics, 52(6). doi:10.1002/mp.17866. Background: Organ-specific multi-pinhole (MPH) SPECT imaging could potentially improve the sensitivity/resolution trade-off and image quality (IQ), while facilitating the use of a variety of imaging agents, thereby addressing diagnostic, quantitative, and research clinical needs. Purpose: To investigate through simulation six different MPH aperture-layout designs, plus variations in projection multiplexing (MUX) and truncation, for a prototype brain-dedicated MPH SPECT system, named AdaptiSPECT-C, to understand the trade-offs of such choices and guide selection of an optimal design for construction of the actual AdaptiSPECT-C system. Methods: The prototype AdaptiSPECT-C system investigated herein employs 25 MPH gamma-camera modules arranged in three rings to image a 21 cm diameter spherical volume-of-interest (VOI). With a focal-point (FP) to center-of-detector distance of 38.7 cm, the pinhole aperture diameters were constrained to provide a calculated spatial resolution of 8 mm at the FP. Variations in the number of pinhole (PH) apertures, FP-to-aperture distance, PH layout, temporal changes in MUX, and extent of truncation of the projection images were investigated. Designs of the aperture layouts were used to create inputs for GATE and analytic simulations of a sphere phantom with uniform Tc-99m activity filling the VOI, to assess MUX, detector utilization, and uniformity in reconstructed slices. We investigated axial and angular sampling using customized spherical Defrise and Derenzo phantoms. Finally, we assessed reconstructed IQ and activity quantification in reconstructions of analytic simulations of the XCAT digital anthropomorphic phantom with activity and attenuation distributions mimicking clinical SPECT brain-perfusion imaging. For each phantom, comparison was also made to imaging with a dual-headed SPECT system with low-energy high-resolution (LEHR) parallel-hole (Vertex high resolution [VXHR]) collimators. Results: Sensitivity at the FP (SENS) for a Tc-99m source in air, calculated relative to a clinical dual-headed SPECT system with VXHR collimators, was 2.7× higher for a single aperture with no MUX or truncation, increased to 5.7× for five apertures with limited VOI truncation and MUX, and decreased to 2.5× with 13 apertures with limited MUX. For the spherical tub phantom, limited truncation did not impact uniformity, MUX decreased it, and temporal shuttering of projections helped lessen this impact. Visually, the 6.4 mm rods were generally well differentiated for the single central apertures. For designs with four or more apertures, all the 4.8 mm rods were well differentiated visually. Projection images of the XCAT phantom, acquired for an imaging time that would result in the minimum clinically recommended count level for brain-perfusion imaging with parallel-hole collimators, showed low MUX of the brain structures for all of the MPH aperture-layout designs. The best reconstructions for the XCAT phantom, both visually and quantitatively, were obtained with the design using 4 or 5 PH apertures for the aperture-layout design that included MUX and some truncation of imaging. Conclusions: We determined, for a prototype brain-dedicated MPH SPECT system employing 25 camera modules in three rings with different PH layout designs imaging a 21 cm diameter spherical VOI, that a system with five apertures per module provided the best SENS and IQ of the XCAT brain phantom, both visually and numerically.
- Feng, Y., Worstell, W., Kupinski, M., Furenlid, L. R., & Sabet, H. (2024). Resolution recovery on list mode MLEM reconstruction for dynamic cardiac SPECT system. Biomedical Physics and Engineering Express, 10(1). doi:10.1088/2057-1976/ad0f40. The Dynamic Cardiac SPECT (DC-SPECT) system is being developed at the Massachusetts General Hospital, featuring a static cardio-focused asymmetrical geometry enabling simultaneous high-resolution and high-sensitivity imaging. Among 14 design iterations of the DC-SPECT with varying numbers of detector heads, system sensitivity, and resolution, the current version under development features 10 mm FWHM geometrical resolution (without resolution recovery) and 0.07% sensitivity at the center of the FOV; this is a 1.5× resolution gain and a 7× sensitivity gain compared to a conventional dual-head gamma camera (0.01% sensitivity and 15 mm resolution). This work presents an improvement in imaging resolution achieved by implementing a spatially variant point spread function (SV-PSF) with list-mode MLEM reconstruction. A resolution recovery method by PSF deconvolution is validated on list-mode MLEM reconstruction for the DC-SPECT. A spatially invariant PSF is included as an additional test to show the influence of the PSF modelling accuracy on reconstructed image quality. We compare the MLEM reconstruction with and without PSF deconvolution; an analytic model is used for the calculation of the system response, and the results are compared to reconstruction with system modelling using Monte Carlo (MC) based methods. Results show that with PSF modelling applied, the quality of the reconstructed image is improved, and the DC-SPECT system can achieve a 4.5 mm central spatial resolution with an average of 795 counts/MBq. Both the SV-PSF and the spatially invariant PSF improve the image quality, and the reconstruction with the SV-PSF generates line profiles closer to the ground truth. The results show substantial improvement over the GE Discovery 570c performance (7 mm spatial resolution with an average of 460 counts/MBq; 5.8 mm resolution at the FOV center). The impact of PSF deconvolution is significant; the improvement in reconstructed image quality is evident in comparison to the MC-simulated system matrix with the same sampling size in the simulation.
- Pells, S., Zeraatkar, N., Kalluri, K. S., Moore, S. C., May, M., Furenlid, L. R., Kupinski, M. A., Kuo, P. H., & King, M. A. (2024). Correction of multiplexing artefacts in multi-pinhole SPECT through temporal shuttering, de-multiplexing of projections, and alternating reconstruction. Physics in Medicine and Biology, 69(12). doi:10.1088/1361-6560/ad4f47. Objective. Single-photon emission computed tomography (SPECT) with pinhole collimators can provide high-resolution imaging, but is often limited by low sensitivity. Acquiring projections simultaneously through multiple pinholes affords both high resolution and high sensitivity. However, the overlap of projections from different pinholes on detectors, known as multiplexing, has been shown to cause artefacts which degrade reconstructed images. Approach. Multiplexed projection sets were considered here using an analytic simulation model of AdaptiSPECT-C, a brain-dedicated multi-pinhole SPECT system. AdaptiSPECT-C has fully adaptable aperture shutters, so it can acquire projections with a combination of multiplexed and non-multiplexed frames using temporal shuttering. Two strategies for reducing multiplexing artefacts were considered: an algorithm to de-multiplex projections, and an alternating reconstruction strategy for projections acquired with a combination of multiplexed and non-multiplexed frames. Geometric and anthropomorphic digital phantoms were used to assess a number of metrics. Main results. Both de-multiplexing strategies showed a significant reduction in image artefacts and improved fidelity, image uniformity, contrast recovery, and activity recovery (AR). In all cases, the two de-multiplexing strategies resulted in superior metrics to those from images acquired with only multiplexing-free frames. The de-multiplexing algorithm provided reduced image noise and superior uniformity, whereas the alternating strategy improved contrast and AR. Significance. The use of these de-multiplexing algorithms means that multi-pinhole SPECT systems can acquire projections with more multiplexing without degradation of images.
- Romanchek, G. R., Shoop, G., Kupinski, M. A., Kuo, P. H., King, M., Furenlid, L. R., & Abbaszadeh, S. (2024). Investigation of Quantum Entanglement Information for β+γ Coincidences. Bio-Algorithms and Med-Systems, 20. doi:10.5604/01.3001.0054.9079. Objective: We assess the viability of using quantum entanglement (QE) information for improving event classification in a combined PET-Compton Camera (PET-CC) system, particularly in the potential for distinguishing true positron annihilation events from Random events due to prompt gamma contamination for β+ and γ emitting isotopes. Methods: Monte Carlo GATE simulations were performed to evaluate the sensitivity and accuracy of event classification in various scenarios using ground truth data, including standard PET events and Compton Camera interactions. QE-sensitive data subsets were identified and filtered based on either polar scattering angles (θ) or the energy of the initial Compton scatter (EC). The enhancement ratio (the ratio of the difference of azimuthal scattering at Δφ = 90° and 0°) and the fraction of post-filter Trues were used as metrics. Results: The simulations showed that QE information could assist in resolving energy ambiguities, particularly in cases where prompt gamma emissions complicate event pairing. Filtering based on EC provided a higher enhancement ratio (R ≈ 1.8) compared to θ-based filtering (R ≈ 1.4), indicating better discrimination between True and Random events. The ratio of Trues to Total events passing the EC filter (0.837) greatly improved upon that of the θ-based filter (0.541). Conclusions: Our results suggest that energy-based filtering is more effective in leveraging QE information, but further refinement of filtering algorithms is needed to fully realize its benefits. While QE has the potential to improve event classification in PET-CC systems for a few coincidence cases, further studies are needed to utilize this paradigm in image formation.
- Clarkson, E., Hancock, J., Hinton, G. W., Kupinski, M. A., Latvakoski, H., & Taylor, M. (2022). Performing tomographic reconstructions from a satellite looking toward Earth. Part 2: analysis of image quality. Journal of the Optical Society of America A, 39(7), 1282. doi:10.1364/josaa.449218. For imaging instruments that are in space looking toward the Earth, there are a variety of nuisance signals that can get in the way of performing certain imaging tasks, such as reflections from clouds, reflections from the ground, and emissions from the OH-airglow layer. A method for separating these signals is to perform tomographic reconstructions from the collected data. A lingering difficulty for this method is altitude-axis resolution, and different methods for improving it are discussed. An implementation of the maximum likelihood expectation maximization algorithm is given and analyzed.
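The maximum likelihood expectation maximization (MLEM) algorithm analyzed in this paper has a compact multiplicative form, x ← x · Aᵀ(y / Ax) / Aᵀ1. A minimal dense-matrix sketch follows (a hypothetical toy system, not the satellite viewing geometry of the paper):

```python
import numpy as np

def mlem(A, y, n_iter=500, eps=1e-12):
    """MLEM for the Poisson measurement model y ~ Poisson(A @ x).

    A : (m, n) nonnegative system matrix
    y : (m,) measured counts
    Returns a nonnegative estimate of x.
    """
    sens = A.sum(axis=0)                  # sensitivity image, A^T 1
    x = np.ones(A.shape[1])               # uniform positive starting estimate
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = y / np.maximum(proj, eps)  # measured / predicted counts
        x = x * (A.T @ ratio) / np.maximum(sens, eps)
    return x
```

The multiplicative update preserves nonnegativity by construction, and for consistent noiseless data it converges toward the exact solution; with noisy data, reconstructions are typically stopped early or regularized.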
- Dereniak, E. L., Hagen, N., & Kupinski, M. (2022). Gaussian profile estimation in one dimension: erratum. Applied Optics, 61(16), 4710. doi:10.1364/ao.462196. This erratum corrects errors in Appl. Opt. 46, 5374 (2007), doi:10.1364/AO.46.005374.
- Nishikawa, R. M., Deserno, T. M., Madabhushi, A., Krupinski, E. A., Summers, R. M., Hoeschen, C., Mello-Thoms, C., Myers, K. J., Kupinski, M. A., & Siewerdsen, J. H. (2022). Fifty years of SPIE Medical Imaging proceedings papers. Journal of Medical Imaging, 9(Suppl 1).
- Abayazeed, A., Auer, B., Beenhouwer, J. D., Furenlid, L. R., Kalluri, K., King, M. A., Kuo, P. H., Kupinski, M. A., & Zeraatkar, N. (2021). Performance of the AdaptiSPECT-C system for tumor quantification, detection, and localization in 123I-CLINDE brain SPECT imaging. The Journal of Nuclear Medicine, 62, 1400. Objectives: Due to its higher tumor affinity, the 123I-CLINDE agent has recently come to the forefront as an improved alternative to available agents for cerebral tumor imaging [1]. The AdaptiSPECT-C, a brain-dedicated multi-pinhole system with an adaptable pinhole configuration, is being designed and constructed by our team [2-4]. In this simulation work, we evaluated through numerical observer and quantification studies the detection/localization capabilities and quantitative accuracies of AdaptiSPECT-C with various pinhole configurations for 123I-CLINDE imaging in comparison to a dual-head parallel-hole system commonly employed for brain imaging. Methods: The AdaptiSPECT-C design simulated consists of 24 square detector modules, each irradiated by up to 5 pinhole apertures. Each of the apertures is adaptable in size (i.e., 1.2, 2.6, and 4 mm) and can be opened or closed independently [3,4]. Through task-based performance studies, we determined detection and localization capabilities using the non-prewhitening matched-filter scanning observer for the 3 pinhole sizes and 3 pinhole combinations: 1 central, 4 oblique, and central+oblique pinholes. Results were compared with those of a dual-head system employing parallel-hole LEHR collimators [5]. The dataset consisted of 198 digital phantoms, 99 without lesion and 99 with spherical tumors, 1 cm in diameter, spatially distributed within one transverse slice of the brain of an XCAT phantom [6]. The tumor size and the tumor-to-background contrast of 1.8 were selected on the basis that they correspond to the lowest range of measurable tumor sizes and contrasts seen clinically [7]. Simulations were performed in GATE [8], and projections were reconstructed with a pixel-based OSEM reconstruction with 16 subsets [9]. The 9 pinhole combinations were compared for an equal imaging time based on the simulated sensitivity compared to clinical imaging performed with the dual-head parallel system. The reconstructed images with lesion were compared to the ground-truth images in terms of lesion activity and contrast recoveries, the NRMSE, and the PSNR for a region of interest of 8 mm in radius placed around each lesion location. The task-based performance studies on tumor detection and localization were performed in 2D over a single slice of the reconstruction. For the low-sensitivity high-resolution pinhole configurations (i.e., 1.2 mm), we investigated post-reconstruction Gaussian filtering to improve control of the statistical noise. Results: The oblique and central+oblique configurations of AdaptiSPECT-C for all the pinhole sizes showed considerable improvements in terms of quantification in the lesion regions and tumor detection/localization compared to those of a dual-head parallel system. The central combination with a 2.6 mm diameter led to the best tumor detection capability at low iteration. For the same size, the oblique and central+oblique combinations showed similar detection at higher iteration and improved quantification. However, the solely central configuration was the worst combination for accurate tumor localization. The 2.6 mm size in an oblique or central+oblique combination showed improved localization capability at low and medium iteration, while the 4 mm size, due to higher sensitivity, required more iteration to achieve similar performance. The 1.2 mm size, strongly impacted by statistical noise, led to poor localization and quantification. Post-reconstruction Gaussian filtering slightly improved the localization accuracy at high iteration while degrading the detection capability. Conclusions: In this work, we demonstrated that a 2.6 mm diameter pinhole in oblique or central+oblique combinations for AdaptiSPECT-C is suited for improved quantification, detection, and localization of cerebral tumors for 123I-CLINDE imaging. Research Support: NIBIB/NIH Grant (R01EB022521).
- Doty, K. J., Kupinski, M. A., Richards, R. G., Ruiz-Gonzalez, M., King, M. A., Kuo, P. H., & Furenlid, L. R. (2021). 3D Position Estimation for the AdaptiSPECT-C Modular Gamma-Ray Cameras. IEEE NSS/MIC. doi:10.1109/nss/mic44867.2021.9875617
- Rahman, A., Zhu, Y., Clarkson, E., Kupinski, M. A., Frey, E. C., & Jha, A. K. (2020). Fisher information analysis of list-mode SPECT emission data for joint estimation of activity and attenuation distribution. Inverse Problems, 36(8). The potential to perform attenuation and scatter compensation (ASC) in single-photon emission computed tomography (SPECT) imaging without a separate transmission scan is highly significant. In this context, attenuation in SPECT is primarily due to Compton scattering, where the probability of Compton scatter is proportional to the attenuation coefficient of the tissue, and the energy of the scattered photon and the scattering angle are related. Based on this premise, we investigated whether the SPECT scattered-photon data acquired in list-mode (LM) format and including the energy information can be used to estimate the attenuation map. For this purpose, we propose a Fisher-information-based method that yields the Cramer-Rao bound (CRB) for the task of jointly estimating the activity and attenuation distribution using only the SPECT emission data. In the process, a path-based formalism to process the LM SPECT emission data, including the scattered-photon data, is proposed. The Fisher information method was implemented on NVIDIA graphics processing units (GPUs) for acceleration. The method was applied to analyze the information content of SPECT LM emission data, which contains up to first-order scattered events, in a simulated SPECT system with parameters modeling a clinical system, using realistic computational studies with 2-D digital synthetic and anthropomorphic phantoms. The method was also applied to LM data containing up to second-order scatter for a synthetic phantom. Experiments with anthropomorphic phantoms simulated myocardial perfusion and dopamine transporter (DaT)-Scan SPECT studies. The results show that the CRB obtained for the attenuation and activity coefficients was typically much lower than the true value of these coefficients. An increase in the number of detected photons yielded a lower CRB for both the attenuation and activity coefficients. Further, we observed that systems with better energy resolution yielded a lower CRB for the attenuation coefficient. Overall, the results provide evidence that LM SPECT emission data, including the scattered photons, contains information to jointly estimate the activity and attenuation coefficients.
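The Fisher-information analysis described in this abstract has a simple binned-data analogue: for counts y_i ~ Poisson((Ax)_i), the Fisher information matrix is F_jk = Σ_i A_ij A_ik / (Ax)_i, and the diagonal of F⁻¹ gives the Cramer-Rao bound. A minimal sketch (toy dense matrix, not the paper's list-mode formalism; function names are my own):

```python
import numpy as np

def poisson_fim(A, x):
    """Fisher information matrix for y_i ~ Poisson((A x)_i):
    F_jk = sum_i A_ij * A_ik / (A x)_i."""
    ybar = A @ x                       # mean counts in each bin
    return A.T @ (A / ybar[:, None])   # divide each row of A by its ybar_i

def cramer_rao_bound(A, x):
    """Diagonal of F^{-1}: the variance lower bound, per parameter,
    for any unbiased estimator."""
    return np.diag(np.linalg.inv(poisson_fim(A, x)))
```

Scaling the system matrix by a factor c (more detected photons) scales F by c and the CRB by 1/c, which mirrors the paper's observation that more counts yield a lower CRB.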
- Chen, Y., Lou, Y., Wang, K., Kupinski, M. A., & Anastasio, M. A. (2019). Reconstruction-Aware Imaging System Ranking by Use of a Sparsity-Driven Numerical Observer Enabled by Variational Bayesian Inference. IEEE Transactions on Medical Imaging, 38(5), 1251-1262. It is widely accepted that optimization of imaging system performance should be guided by task-based measures of image quality. It has been advocated that imaging hardware or data-acquisition designs should be optimized by use of an ideal observer that exploits full statistical knowledge of the measurement noise and the class of objects to be imaged, without consideration of the reconstruction method. In practice, accurate and tractable models of the complete object statistics are often difficult to determine. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and sparse image reconstruction are innately coupled technologies. In this paper, a sparsity-driven observer (SDO) that can be employed to optimize hardware by use of a stochastic object model describing object sparsity is described and investigated. The SDO and sparse reconstruction method can, therefore, be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute the SDO test statistic, computational tools developed recently for variational Bayesian inference with sparse linear models are adopted. The use of the SDO to rank data-acquisition designs in a stylized example motivated by magnetic resonance imaging is demonstrated. This paper reveals that the SDO can produce rankings that are consistent with visual assessments of the reconstructed images but different from those produced by use of the traditionally employed Hotelling observer.
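The "traditionally employed Hotelling observer" referenced in this abstract has a closed-form detectability when the mean signal difference and the class covariance are known: the template is w = K⁻¹Δg and the detectability is SNR² = ΔgᵀK⁻¹Δg. A minimal sketch under that known-statistics assumption (toy dimensions; for real images the covariance must be estimated or approximated):

```python
import numpy as np

def hotelling_template(dg, K):
    """Hotelling template w = K^{-1} dg, where dg is the mean
    signal difference between classes and K the average class covariance."""
    return np.linalg.solve(K, dg)

def hotelling_snr2(dg, K):
    """Hotelling detectability: SNR^2 = dg^T K^{-1} dg."""
    return float(dg @ hotelling_template(dg, K))
```

The test statistic for an image g is simply w @ g; ranking imaging-system designs by SNR² is the classical use that the paper's sparsity-driven observer is compared against.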
- Ding, Y., Barrett, H. H., Kupinski, M. A., Vinogradskiy, Y., Miften, M., & Jones, B. L. (2019). Objective assessment of the effects of tumor motion in radiation therapy. Medical physics, 46(7), 3311-3323. More info: Internal organ motion reduces the accuracy and efficacy of radiation therapy. However, there is a lack of tools to objectively (based on a medical or scientific task) assess the dosimetric consequences of motion, especially on an individual basis. We propose to use therapy operating characteristic (TOC) analysis to quantify the effects of motion on treatment efficacy for individual patients. We demonstrate the application of this tool with pancreatic stereotactic body radiation therapy (SBRT) clinical data and explore the origin of motion sensitivity.
- Gruner, F., Blumendorf, F., Schmutzler, O., Staufer, T., Wiesner, U., Rosentreter, T., Loers, G., Lutz, D., Richter, B., Fischer, M., Schulz, F., Steiner, S., Warmer, M., Burkhardt, A., Meents, A., Hoeschen, C., Bradbury, M. S., & Kupinski, M. A. (2018). Localising functionalised gold-nanoparticles in murine spinal cords by X-ray fluorescence imaging and background-reduction through spatial filtering for human-sized objects. Scientific reports, 8(1), 16561. doi:10.1038/s41598-018-34925-3. More info: Accurate in vivo localisation of minimal amounts of functionalised gold-nanoparticles, enabling e.g. early-tumour diagnostics and pharmacokinetic tracking studies, requires a precision imaging system offering very high sensitivity, temporal and spatial resolution, large depth penetration, and arbitrarily long serial measurements. X-ray fluorescence imaging could offer such capabilities; however, its utilisation for human-sized scales is hampered by a high intrinsic background level. Here we measure and model this anisotropic background and present a spatial filtering scheme for background reduction enabling the localisation of nanoparticle-amounts as reported from small-animal tumour models. As a basic application study towards precision pharmacokinetics, we demonstrate specific localisation to sites of disease by adapting gold-nanoparticles with small targeting ligands in murine spinal cord injury models, at record sensitivity levels using sub-mm resolution. Both studies contribute to the future use of molecularly-targeted gold-nanoparticles as next-generation clinical diagnostic and pharmacokinetic tools.
- Johnson, L. C., Kupinski, M. A., Lin, A., Peterson, T. E., & Shokouhi, S. (2018). Task-based design of a synthetic-collimator SPECT system used for small animal imaging. Medical physics, 45(7), 2952-2963. doi:10.1002/mp.12952. More info: In traditional multipinhole SPECT systems, image multiplexing - the overlapping of pinhole projection images - may occur on the detector, which can inhibit quality image reconstructions due to photon-origin uncertainty. One proposed system to mitigate the effects of multiplexing is the synthetic-collimator SPECT system. In this system, two detectors, a silicon detector and a germanium detector, are placed at different distances behind the multipinhole aperture, allowing for image detection to occur at different magnifications and photon energies, resulting in higher overall sensitivity while maintaining high resolution. The unwanted effects of multiplexing are reduced by utilizing the additional data collected from the front silicon detector. However, determining optimal system configurations for a given imaging task requires efficient parsing of the complex parameter space, to understand how pinhole spacings and the two detector distances influence system performance. In our simulation studies, we use the ensemble mean-squared error of the Wiener estimator (EMSE_W) as the figure of merit to determine optimum system parameters for the task of estimating the uptake of a 123I-labeled radiotracer in three different regions of a computer-generated mouse brain phantom. The segmented phantom map is constructed by using data from the MRM NeAt database and allows for the reduction in dimensionality of the system matrix which improves the computational efficiency of scanning the system's parameter space.
To contextualize our results, the Wiener estimator is also compared against a region of interest estimator using maximum-likelihood reconstructed data. Our results show that the synthetic-collimator SPECT system outperforms traditional multipinhole SPECT systems in this estimation task. We also find that image multiplexing plays an important role in the system design of the synthetic-collimator SPECT system, with optimal germanium detector distances occurring at maxima in the derivative of the percent multiplexing function. Furthermore, we report that improved task performance can be achieved by using an adaptive system design in which the germanium detector distance may vary with projection angle. Finally, in our comparative study, we find that the Wiener estimator outperforms the conventional region of interest estimator. Our work demonstrates how this optimization method has the potential to quickly and efficiently explore vast parameter spaces, providing insight into the behavior of competing factors, which are otherwise very difficult to calculate and study using other existing means.
- Chen, B., Favazza, C. P., Kofler, J. M., Kupinski, M. A., Leng, S., Mccollough, C. H., & Yu, L. (2017). Correlation between a 2D channelized Hotelling observer and human observers in a low-contrast detection task with multislice reading in CT. Medical physics, 44(8), 3990-3999. doi:10.1002/mp.12380. More info: Model observers have been successfully developed and used to assess the quality of static 2D CT images. However, radiologists typically read images by paging through multiple 2D slices (i.e., multislice reading). The purpose of this study was to correlate human and model observer performance in a low-contrast detection task performed using both 2D and multislice reading, and to determine if the 2D model observer still correlates well with human observer performance in multislice reading. A phantom containing 18 low-contrast spheres (6 sizes × 3 contrast levels) was scanned on a 192-slice CT scanner at five dose levels (CTDIvol = 27, 13.5, 6.8, 3.4, and 1.7 mGy), each repeated 100 times. Images were reconstructed using both filtered-backprojection (FBP) and an iterative reconstruction (IR) method (ADMIRE, Siemens). A 3D volume of interest (VOI) around each sphere was extracted and placed side-by-side with a signal-absent VOI to create a 2-alternative forced choice (2AFC) trial. Sixteen 2AFC studies were generated, each with 100 trials, to evaluate the impact of radiation dose, lesion size and contrast, and reconstruction methods on object detection. In total, 1600 trials were presented to both model and human observers. Three medical physicists acted as human observers and were allowed to page through the 3D volumes to make a decision for each 2AFC trial. The human observer performance was compared with the performance of a multislice channelized Hotelling observer (CHO_MS), which integrates multislice image data, and with the performance of a previously validated CHO, which operates on static 2D images (CHO_2D).
For comparison, the same 16 2AFC studies were also performed in a 2D viewing mode by the human observers and compared with the multislice viewing performance and the two CHO models. Human observer performance was well correlated with the CHO_2D performance in the 2D viewing mode [Pearson product-moment correlation coefficient R = 0.972, 95% confidence interval (CI): 0.919 to 0.990] and with the CHO_MS performance in the multislice viewing mode (R = 0.952, 95% CI: 0.865 to 0.984). The CHO_2D performance, calculated from the 2D viewing mode, also had a strong correlation with human observer performance in the multislice viewing mode (R = 0.957, 95% CI: 0.879 to 0.985). Human observer performance varied between the multislice and 2D modes. One reader performed better in the multislice mode (P = 0.013); whereas the other two readers showed no significant difference between the two viewing modes (P = 0.057 and P = 0.38). A 2D CHO model is highly correlated with human observer performance in detecting spherical low-contrast objects in multislice viewing of CT images. This finding provides some evidence for the use of a simpler, 2D CHO to assess image quality in clinically relevant CT tasks where multislice viewing is used.
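The channelized Hotelling observer and 2AFC scoring used in this study follow a standard recipe: project each image onto a small set of channels, form the Hotelling template from the inverse channel covariance and the mean class difference, then score each 2AFC pair by which member gets the larger test statistic. A minimal sketch, with a simplified image model, Gaussian channels, and a synthetic blob signal standing in for the CT data and dense-difference-of-Gaussians channels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the CT data: 16x16 patches containing a faint
# Gaussian blob (signal-present) or noise only (signal-absent).
npix = 16
yy, xx = np.mgrid[:npix, :npix]
r2 = (xx - npix / 2) ** 2 + (yy - npix / 2) ** 2
sig = 0.4 * np.exp(-r2 / (2 * 2.0 ** 2))

def make_images(n, with_signal):
    imgs = rng.normal(size=(n, npix, npix))
    if with_signal:
        imgs += sig
    return imgs.reshape(n, -1)

# A few Gaussian radial channels (illustrative; not the paper's channel set)
U = np.stack([np.exp(-r2 / (2 * w ** 2)).ravel() for w in (1.0, 2.0, 4.0, 8.0)],
             axis=1)

# Train the CHO: template = (channel covariance)^-1 @ (mean channel difference)
v_sp = make_images(500, True) @ U        # channelized signal-present data
v_sa = make_images(500, False) @ U       # channelized signal-absent data
S = 0.5 * (np.cov(v_sp.T) + np.cov(v_sa.T))
w_cho = np.linalg.solve(S, v_sp.mean(axis=0) - v_sa.mean(axis=0))

# Score held-out 2AFC trials: a trial is correct when the signal-present
# member of the pair receives the larger test statistic; the fraction
# correct estimates the area under the ROC curve.
t_sp = make_images(1000, True) @ U @ w_cho
t_sa = make_images(1000, False) @ U @ w_cho
print(np.mean(t_sp > t_sa))   # well above the 0.5 chance level
```

The multislice variant (CHO_MS) differs only in that the channel vectors are assembled from several adjacent slices before the same Hotelling machinery is applied.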
- Chen, Y., Lou, Y., Kupinski, M. A., & Anastasio, M. A. (2017). Task-based data-acquisition optimization for sparse image reconstruction systems. Proceedings of SPIE, 10136. doi:10.1117/12.2255536. More info: Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
- Clarkson, E., Ghanbari, N., Kupinski, M. A., & Li, X. (2017). Optimization of an Adaptive SPECT System with the Scanning Linear Estimator. IEEE transactions on radiation and plasma medical sciences, 1(5), 435-443. doi:10.1109/trpms.2017.2715041. More info: A method for optimization of an adaptive Single Photon Emission Computed Tomography (SPECT) system is presented. Adaptive imaging systems can quickly change their hardware configuration in response to data being generated in order to improve image quality for a specific task. In this work, we simulate an adaptive SPECT system and propose a method for finding the adaptation that maximizes the performance on a signal estimation task. To start with, a simulated object model containing a spherical signal is imaged with a scout configuration. A Markov-Chain Monte Carlo (MCMC) technique utilizes the scout data to generate an ensemble of possible objects consistent with those data. This object ensemble is imaged by numerous simulated hardware configurations and, for each system, estimates of signal activity, size, and location are calculated via the Scanning Linear Estimator (SLE). A figure of merit, based on a Modified Dice Index (MDI), quantifies the performance of each imaging configuration and allows for optimization of the adaptive SPECT. This figure of merit is calculated by multiplying two terms: the first term uses the definition of the Dice similarity index to determine the percent of overlap between the actual and the estimated spherical signal; the second term utilizes an exponential function that measures the squared error for the activity estimate. The MDI combines the errors in estimates of activity, size, and location in one convenient metric and allows for simultaneous optimization of the SPECT system with respect to all the estimated signal parameters.
The results of our optimizations indicate that the adaptive system performs better than a non-adaptive one in conditions where the diagnostic scan has a low photon count, on the order of a thousand photons per projection. In a statistical study, we optimized the SPECT system for one hundred unique objects and demonstrated that the average MDI on an estimation task is 0.84 for the adaptive system and 0.65 for the non-adaptive system.
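The Modified Dice Index described above multiplies a Dice-overlap term by an exponential penalty on the squared activity-estimation error. A minimal sketch on a voxel grid; the penalty scale `tau` is an illustrative assumption, since the abstract does not specify the exponential's width:

```python
import numpy as np

def sphere_mask(grid, center, radius):
    # Boolean voxel mask of a sphere with the given center and radius
    d2 = sum((g - c) ** 2 for g, c in zip(grid, center))
    return d2 <= radius ** 2

def modified_dice_index(c_true, r_true, c_est, r_est, a_true, a_est,
                        tau=1.0, n=64):
    # Dice term: percent overlap between the actual and estimated spheres
    ax = np.linspace(0.0, 1.0, n)
    grid = np.meshgrid(ax, ax, ax, indexing="ij")
    m_t = sphere_mask(grid, c_true, r_true)
    m_e = sphere_mask(grid, c_est, r_est)
    dice = 2.0 * np.logical_and(m_t, m_e).sum() / (m_t.sum() + m_e.sum())
    # Activity term: exponential of the squared error in the activity
    # estimate (tau sets the penalty scale and is an assumption here)
    return dice * np.exp(-((a_est - a_true) ** 2) / tau)

# A perfect estimate scores 1; errors in location, size, or activity lower it
perfect = modified_dice_index((0.5, 0.5, 0.5), 0.2, (0.5, 0.5, 0.5), 0.2, 3.0, 3.0)
shifted = modified_dice_index((0.5, 0.5, 0.5), 0.2, (0.6, 0.5, 0.5), 0.25, 3.0, 2.5)
print(perfect, shifted)   # 1.0 and a value strictly between 0 and 1
```

Because both factors live in [0, 1], the product is a single score that can be averaged over an object ensemble and maximized over hardware configurations, which is how it serves as the optimization figure of merit above.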
- Fan, J., Kupinski, M. A., & Tseng, H. W. (2017). Assessing computed tomography image quality for combined detection and estimation tasks. Journal of medical imaging (Bellingham, Wash.), 4(4), 045503. doi:10.1117/1.jmi.4.4.045503. More info: Maintaining or even improving image quality while lowering patient dose is always the desire in clinical computed tomography (CT) imaging. Iterative reconstruction (IR) algorithms have been designed to allow for a reduced dose while maintaining or even improving image quality. However, we have previously shown that the dose-saving capabilities allowed with IR are different for different clinical tasks. The channelized scanning linear observer (CSLO) was applied to study clinical tasks that combine detection and estimation when assessing CT image data. The purpose of this work is to illustrate the importance of task complexity when assessing dose savings and to move toward more realistic tasks when performing these types of studies. Human-observer validation of these methods will take place in a future publication. Low-contrast objects embedded in body-size phantoms were imaged multiple times and reconstructed by filtered back projection (FBP) and an IR algorithm. The task was to detect, localize, and estimate the size and contrast of low-contrast objects in the phantom. Independent signal-present and signal-absent regions of interest cropped from images were channelized by the dense-difference of Gauss channels for CSLO training and testing. Estimation receiver operating characteristic (EROC) curves and the areas under EROC curves (EAUC) were calculated by CSLO as the figure of merit. The one-shot method was used to compute the variance of the EAUC values. Results suggest that the IR algorithm studied in this work could efficiently reduce the dose by [Formula: see text] while maintaining an image quality comparable to conventional FBP reconstruction, warranting further investigation using real patient data.
- Macgahan, C. J., Kupinski, M. A., Brubaker, E. M., Hilton, N. R., & Marleau, P. A. (2017). Linear models to perform treaty verification tasks for enhanced information security. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 844, 147-157. doi:10.1016/j.nima.2016.11.010. More info: Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
- MacGahan, C. J., Kupinski, M. A., Hilton, N. R., Brubaker, E. M., & Johnson, W. C. (2016). Development of an ideal observer that incorporates nuisance parameters and processes list-mode data. Journal of the Optical Society of America. A, Optics, image science, and vision, 33(4), 689-697. More info: Observer models were developed to process data in list-mode format in order to perform binary discrimination tasks for use in an arms-control-treaty context. Data used in this study was generated using GEANT4 Monte Carlo simulations for photons using custom models of plutonium inspection objects and a radiation imaging system. Observer model performance was evaluated and presented using the area under the receiver operating characteristic curve. The ideal observer was studied under both signal-known-exactly conditions and in the presence of unknowns such as object orientation and absolute count-rate variability; when these additional sources of randomness were present, their incorporation into the observer yielded superior performance.
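Both treaty-verification papers above score observer performance by the area under the ROC curve. Given test-statistic samples under the two hypotheses, the standard nonparametric (Wilcoxon-Mann-Whitney) AUC estimate is simply the fraction of correctly ordered pairs; the observer outputs below are synthetic Gaussians, not the GEANT4 data:

```python
import numpy as np

def auc_mann_whitney(t_signal, t_null):
    # Nonparametric AUC: fraction of (signal, null) test-statistic pairs
    # correctly ordered, counting ties as one half (Wilcoxon-Mann-Whitney)
    ts = np.asarray(t_signal, dtype=float)[:, None]
    tn = np.asarray(t_null, dtype=float)[None, :]
    return np.mean((ts > tn) + 0.5 * (ts == tn))

rng = np.random.default_rng(2)
# Synthetic observer outputs for the two treaty hypotheses (illustrative only)
t1 = rng.normal(1.5, 1.0, size=400)   # item is treaty accountable
t0 = rng.normal(0.0, 1.0, size=400)   # item is not
print(auc_mann_whitney(t1, t0))       # near Phi(1.5 / sqrt(2)), about 0.86
```

For equal-variance Gaussian test statistics the expected AUC is Phi(d/sqrt(2)), where d is the mean separation in units of the common standard deviation, which is why the estimate above lands near 0.86.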
- Tseng, H. W., Fan, J., & Kupinski, M. A. (2016). Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems. Journal of medical imaging (Bellingham, Wash.), 3(3), 035503. More info: The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment.
- Tseng, H. W., Fan, J., Kupinski, M. A., Okerlund, D. R., & Balhorn, W. H. (2016). Quantitative image quality evaluation for cardiac CT reconstructions. Proceedings of SPIE, 9787. doi:10.1117/12.2208341. More info: Maintaining image quality in the presence of motion is always desirable and challenging in clinical Cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merit. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance of Cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for Cardiac CT systems. To simulate heart motion, a moving coronary-type phantom synchronized with an ECG signal was used. Three different percentage plaques embedded in a 3 mm vessel phantom were imaged multiple times under motion free, 60 bpm, and 80 bpm heart rates. Static (motion free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to the quality of FBP images in higher heart-rate scans.
- Barrett, H. H., Myers, K. J., Hoeschen, C., Kupinski, M. A., & Little, M. P. (2015). Task-based measures of image quality and their relation to radiation dose and patient risk. Physics in medicine and biology, 60, R1.
- Clarkson, E., Ghanbari, N., Kupinski, M. A., & Li, X. (2015). Optimization of an adaptive SPECT system with the scanning linear estimator. Proceedings of SPIE, 9594. doi:10.1117/12.2195782. More info: The adaptive single-photon emission computed tomography (SPECT) system studied here acquires an initial scout image to obtain preliminary information about the object. Then the configuration is adjusted by selecting the size of the pinhole and the magnification that optimize system performance on an ensemble of virtual objects generated to be consistent with the scout data. In this study the object is a lumpy background that contains a Gaussian signal with a variable width and amplitude. The virtual objects in the ensemble are imaged by all of the available configurations and the subsequent images are evaluated with the scanning linear estimator to obtain an estimate of the signal width and amplitude. The ensemble mean squared error (EMSE) on the virtual ensemble between the estimated and the true parameters serves as the performance figure of merit for selecting the optimum configuration. The results indicate that variability in the original object background, noise, and signal parameters leads to a specific optimum configuration in each case. A statistical study carried out for a number of objects shows that the adaptive system on average performs better than its nonadaptive counterpart.
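The scanning linear estimator used above maximizes a likelihood over a grid of candidate signal parameters, with the amplitude entering as a linear functional of the data at each grid point. A 1-D stand-in (known location, white noise) conveys the idea; none of the numbers correspond to the paper's SPECT simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D stand-in: a Gaussian signal of unknown width and amplitude at a known
# location, buried in white noise (illustrative values only).
n = 128
x = np.arange(n)

def gaussian_signal(width, amp):
    return amp * np.exp(-((x - n / 2) ** 2) / (2 * width ** 2))

g = gaussian_signal(6.0, 2.0) + rng.normal(0.0, 0.5, size=n)  # width 6, amp 2

# Scanning linear estimator (simplified): for each candidate width on a grid,
# the best-fitting amplitude is a linear functional of the data; keep the
# grid point with the highest Gaussian log-likelihood.
best = None
for w in np.linspace(2.0, 12.0, 101):
    s = gaussian_signal(w, 1.0)
    a_fit = s @ g / (s @ s)                  # linear in g for fixed width
    loglik = -0.5 * np.sum((g - a_fit * s) ** 2)
    if best is None or loglik > best[0]:
        best = (loglik, w, a_fit)
_, w_hat, a_hat = best
print(w_hat, a_hat)   # should land near the true width and amplitude
```

In the paper this scan runs over width, amplitude, and location simultaneously, and the resulting estimates feed the EMSE figure of merit used to rank pinhole/magnification configurations.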
- Huang, J., Tankam, P., Aquavella, J. V., Hindman, H. B., Clarkson, E., Kupinski, M., & Rolland-Thompson, J. (2015). Tear film thickness estimation using optical coherence tomography and maximum-likelihood estimation. Investigative Ophthalmology & Visual Science, 56, 351.
- Jha, A., Barrett, H. H., Frey, E. C., Clarkson, E. W., Caucci, L., & Kupinski, M. A. (2015). Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions. Physics in Medicine and Biology, 60(18), 7359-7385.
- Marron, M. T., Kupinski, M. A., Stopeck, A., Altbach, M. I., Galons, J., Maskarinec, G., Roe, D. J., Thompson, P. A., Thomson, C. A., & Wertheim, B. C. (2015). Abstract P6-01-18: 2-Hydroxyestrone is associated with breast density measured by mammography and fat:water ratio magnetic resonance imaging in women taking tamoxifen. Cancer Research, 75(9_Supplement), P6-01-18. doi:10.1158/1538-7445.sabcs14-p6-01-18. More info: Research Objectives and Rationale. Tamoxifen (TAM) use has been shown to reduce breast cancer recurrence, with the benefit greater in patients who experience a TAM-associated decrease in percent mammographic density (PMD); findings that support PMD as a biomarker of response to TAM. PMD is a radiographic phenomenon of breast fibroglandular tissue that is associated with breast cancer risk. PMD is inversely associated with body mass index (BMI) and sparse data have shown a weak positive association with sex hormone levels. Limited data exist evaluating the relationship between TAM, the 2-OHE1:16α-OHE1 ratio (concentrations previously hypothesized to be associated with a reduced risk of breast cancer), and PBD. Methods. Using cross-sectional baseline breast density (BD) from an ongoing prevention trial of diindolylmethane (DIM) in 121 women receiving TAM, we evaluated BD in relation to circulating TAM metabolites [TAM, endoxifen, 4-OH TAM, ND TAM], estradiol (E2), sex hormone-binding globulin (SHBG) and urinary 2-OHE1 and 16α-OHE1. PMD was assessed by mammography (n=65; 54%) and also by a novel, non-radiative, non-contrast magnetic resonance imaging-derived fat-water ratio (FWR-MRI) as the fat fractions Fra50 and Fra80 (n=53; 44%), developed for repeat BD assessment in short intervals. This is our first report that BD using digitized mammograms is correlated with FWR-MRI-derived measures designated Fra50 and Fra80; Spearman ρ = 0.90 and 0.86, respectively. Results. As previously demonstrated, BMI was inversely correlated with all measures of BD.
No association was shown between TAM and TAM metabolites and BD or urinary 2-OHE1. Further, we found no relationship between circulating E2 or SHBG concentrations and BD. In contrast, urinary 2-OHE1 levels were positively correlated with BD across all measures of density; 2-OHE1 levels were most strongly correlated with BD measured by FWR-MRI using Fra80 (Spearman ρFra80 = 0.483, p = 0.001, compared to ρFra50 = 0.431, p = 0.004, and ρPD = 0.400, p = 0.003). A significant, but weaker, correlation was observed for the 2-OHE1:16α-OHE1 ratio and BD (ρ values 0.34-0.38). The magnitude of the relationship between 2-OHE1 and BD was similar in pre- and post-menopausal women despite lower PBD after menopause. Conclusions. Our results replicate earlier work from Maskarinec et al. wherein excreted 2-OHE1 was an independent determinant of BD. These data challenge the hypothesis proposed by Yager and Liehr that a higher urinary 2-OHE1 to 16α-OHE1 ratio would be indicative of reduced hormone tumorigenesis. These results suggest a possible comparable binding affinity for the estrogen receptor that may modify endogenous steroid hormones and their effects on BD. Our findings strengthen the arguments favoring a better mechanistic understanding of BD, the biological determinants, and their relationship to breast cancer. This is particularly timely given new mandates to provide BD measures to all women undergoing mammography and recent findings that while BD is associated with breast cancer risk, high BD is not associated with greater breast cancer mortality. Citation Format: Cynthia A Thomson, Patricia A Thompson, Betsy C Wertheim, Denise Roe, Marilyn T Marron, John-Phillipe Galons, Matthew A Kupinski, Maria I Altbach, Gertraud Maskarinec, Alison Stopeck. 2-Hydroxyestrone is associated with breast density measured by mammography and fat:water ratio magnetic resonance imaging in women taking tamoxifen [abstract].
In: Proceedings of the Thirty-Seventh Annual CTRC-AACR San Antonio Breast Cancer Symposium: 2014 Dec 9-13; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2015;75(9 Suppl):Abstract nr P6-01-18.
- Stephen, R. M., Jha, A. K., Roe, D. J., Trouard, T. P., Galons, J., Kupinski, M. A., Frey, G., Cui, H., Squire, S., Pagel, M. D., Rodriguez, J. J., Gillies, R. J., & Stopeck, A. T. (2015). Diffusion MRI with Semi-Automated Segmentation Can Serve as a Restricted Predictive Biomarker of the Therapeutic Response of Liver Metastasis. Magnetic Resonance Imaging, 33(10), 1267-73. doi:10.1016/j.mri.2015.08.006
- Tseng, H., Fan, J., & Kupinski, M. (2015). SU-F-207-16: CT Protocols Optimization Using Model Observer. Medical physics, 42, 3545.
- Wang, K., Lou, Y., Kupinski, M. A., & Anastasio, M. A. (2015). Sparsity-driven ideal observer for computed medical imaging systems. Proceedings of SPIE, 9416. doi:10.1117/12.2082316. More info: The Bayesian ideal observer (IO) has been widely advocated to guide hardware optimization. However, except for special cases, computation of the IO test statistic is computationally burdensome and requires an appropriate stochastic object model that may be difficult to determine in practice. Modern reconstruction methods, referred to as sparse reconstruction methods, exploit the fact that objects of interest typically possess sparse representations and have proven to be highly effective at reconstructing images from under-sampled measurement data. Moreover, in computed imaging approaches that employ compressive sensing concepts, imaging hardware and image reconstruction are innately coupled technologies. In this work, we propose a sparsity-driven IO (SD-IO) to guide the optimization of data acquisition parameters for modern computed imaging systems. The SD-IO employs a variational Bayesian inference method to estimate the posterior distribution and calculates an approximate likelihood ratio analytically as its test statistic. Since it assumes knowledge of low-level statistical properties of the object that are related to sparsity, the SD-IO exploits the same statistical information regarding the object that is utilized by highly effective sparse image reconstruction methods. Preliminary simulation results are presented to demonstrate the feasibility of the SD-IO calculation.
- Clarkson, E., Huang, J., Kupinski, M. A., Rolland, J. P., & Yuan, Q. (2014). Application of Task-based Assessment in Optical Coherence Tomography in the Context of Tear Film Imaging. Frontiers in Optics. doi:10.1364/fio.2014.fw1e.5. More info: In the context of tear film imaging, we developed a task-based assessment framework that enables a customized OCT system to yield unbiased estimates of thickness down to 20 nm with nanometer-class precision.
- Huang, J., Yuan, Q., Zhang, B., Xu, K., Tankam, P., Clarkson, E., Kupinski, M. A., Hindman, H. B., Aquavella, J. V., Suleski, T. J., et al. (2014). Measurement of a multi-layered tear film phantom using optical coherence tomography and statistical decision theory. Biomedical Optics Express, 5(12), 4374-4386.
- Myers, K., Bakic, P., Abbey, C., Kupinski, M., & Mertelmeier, T. (2014). TU-A-17A-02: In Memoriam of Ben Galkin: Virtual Tools for Validation of X-Ray Breast Imaging Systems. Medical Physics, 41(6), 446-447.
- Tseng, H. W., Fan, J., Kupinski, M. A., Sainath, P., & Hsieh, J. (2014). Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers. Medical Physics, 41(7). doi:10.1118/1.4881143. Abstract: Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP) and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian (DDOG) channels. A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low-contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For signal-known-exactly (SKE) and location-unknown/signal-shape-known tasks with circular signals of different sizes and contrasts, the authors' task-based measures showed that FBP-equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%-67% (head phantom) and 68%-82% (body phantom). For the location-unknown/signal-shape-known study, dose reductions of 67%-75% (head phantom) and 67%-77% (body phantom) can be reached. These results suggest that IR images at lower dose settings can reach the same image quality as full-dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the model observers using the IR images was always higher than that seen using the FBP images in the authors' SKE and SKE location-unknown detection tasks. To achieve FBP-equivalent image quality in CT systems, the radiation dose can be lowered by using this IR image reconstruction algorithm. Further studies are warranted using clinical data and human observers to validate these results for more complicated and realistic tasks. © 2014 American Association of Physicists in Medicine.
- Welge, W. A., DeMarco, A. T., Watson, J. M., Rice, P. S., Barton, J. K., & Kupinski, M. A. (2014). Diagnostic potential of multimodal imaging of ovarian tissue using optical coherence tomography and second-harmonic generation microscopy. Journal of Medical Imaging, 1(2), 025501.
- Barrett, H. H., Kupinski, M. A., Müeller, S., Halpern, H. J., Morris, J. C., & Dwyer, R. (2013). Objective assessment of image quality VI: Imaging in radiation therapy. Physics in Medicine and Biology, 58(22), 8197-8213. Abstract: Earlier work on objective assessment of image quality (OAIQ) focused largely on estimation or classification tasks in which the desired outcome of imaging is accurate diagnosis. This paper develops a general framework for assessing imaging quality on the basis of therapeutic outcomes rather than diagnostic performance. By analogy to receiver operating characteristic (ROC) curves and their variants as used in diagnostic OAIQ, the method proposed here utilizes therapy operating characteristic (TOC) curves, which are plots of the probability of tumor control versus the probability of normal-tissue complications as the overall dose level of a radiotherapy treatment is varied. The proposed figure of merit is the area under the TOC curve, denoted AUTOC. This paper reviews an earlier exposition of the theory of TOC and AUTOC, which was specific to the assessment of image-segmentation algorithms, and extends it to other applications of imaging in external-beam radiation treatment as well as in treatment with internal radioactive sources. For each application, a methodology for computing the TOC is presented. A key difference between ROC and TOC is that the latter can be defined for a single patient rather than a population of patients. © 2013 Institute of Physics and Engineering in Medicine.
- Dumas, C., Bernstein, A., Espinoza, A., Morgan, D., Lewis, K., Nipper, M., Barrett, H. H., Kupinski, M. A., & Furenlid, L. R. (2013). SmartCAM: An adaptive clinical SPECT camera. Proceedings of SPIE - The International Society for Optical Engineering, 8853. Abstract: An adaptive pinhole aperture that fits a GE MaxiCam Single-Photon-Emission Computed Tomography (SPECT) system has been designed, built, and is undergoing testing. The purpose of an adaptive aperture is to allow the imaging system to make adjustments to the aperture while imaging data are being acquired. Our adaptive pinhole aperture can alter several imaging parameters, including field of view, resolution, sensitivity, and magnification. The dynamic nature of such an aperture allows for imaging of specific regions of interest based on initial measurements of the patient. Ideally, this mode of data collection will improve the understanding of a patient's condition, and will facilitate better diagnosis and treatment. The aperture was constructed using aluminum and a low-melting-point, high-stopping-power metal alloy called Cerrobend. The aperture utilizes a rotating disk for the selection of a pinhole configuration; as the aluminum disk rotates, different pinholes move into view of the camera face and allow the passage of gamma rays through that particular pinhole. By controlling the angular position of the disk, the optical characteristics of the aperture can be modified, allowing the system to acquire data from controlled regions of interest. First testing was performed with a small radioactive source to prove the functionality of the aperture. © 2013 SPIE.
- Fan, J., Tseng, H., Kupinski, M., Cao, G., Sainath, P., & Hsieh, J. (2013). Study of the radiation dose reduction capability of a CT reconstruction algorithm - LCD performance assessment using mathematical model observers. Proceedings of SPIE - The International Society for Optical Engineering, 8673. Abstract: Radiation dose to the patient has become a major concern today for Computed Tomography (CT) imaging in clinical practice. Various hardware and algorithm solutions have been designed to reduce dose. Among them, iterative reconstruction (IR) has been widely expected to be an effective dose reduction approach for CT. However, there is no clear understanding of the exact amount of dose saving an IR approach can offer for various clinical applications. We know that quantitative image quality assessment should be task-based. This work applied mathematical model observers to study the detectability performance of CT scan data reconstructed using an advanced IR approach as well as the conventional filtered back-projection (FBP) approach. The purpose of this work is to establish a practical and robust approach for CT IR detectability image quality evaluation and to assess the dose saving capability of the IR method under study. Low-contrast (LC) objects embedded in head-size and body-size phantoms were imaged multiple times at different dose levels. Independent signal-present and signal-absent pairs were generated for model observer training and testing. Receiver operating characteristic (ROC) curves for location known exactly and localization ROC (LROC) curves for location unknown, as well as their corresponding area under the curve (AUC) values, were calculated. Results showed that approximately a threefold dose reduction has been achieved using the IR method under study. © 2013 SPIE.
- Huang, J., Clarkson, E., Kupinski, M., Lee, K., Maki, K. L., Ross, D. S., Aquavella, J. V., & Rolland, J. P. (2013). Maximum-likelihood estimation in Optical Coherence Tomography in the context of the tear film dynamics. Biomedical Optics Express, 4(10), 1806-1816. PMID: 24156045; PMCID: PMC3799647. Abstract: Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-Domain OCT and the tear film. With the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. We find, on the assumption that the broadband light source is characterized by circular Gaussian statistics, ML estimates of 40 nm ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, which reveals estimates of nanometer precision. © 2013 Optical Society of America.
- Huang, J., Lee, K., Clarkson, E., Kupinski, M., Maki, K. L., Ross, D. S., Aquavella, J. V., & Rolland, J. P. (2013). Phantom study of tear film dynamics with optical coherence tomography and maximum-likelihood estimation. Optics Letters, 38(10), 1721-1723. PMID: 23938923. Abstract: In this Letter, we implement a maximum-likelihood estimator to interpret optical coherence tomography (OCT) data for the first time, based on Fourier-domain OCT and a two-interface tear film model. We use the root mean square error as a figure of merit to quantify the system performance of estimating the tear film thickness. With the methodology of task-based assessment, we study the trade-off between system imaging speed (temporal resolution of the dynamics) and the precision of the estimation. Finally, the estimator is validated with a digital tear-film dynamics phantom. © 2013 Optical Society of America.
- Jha, A. K., Barrett, H. H., Clarkson, E., Caucci, L., & Kupinski, M. A. (2013). Analytic methods for list-mode reconstruction. International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, California.
- Jha, A. K., Clarkson, E., & Kupinski, M. A. (2013). An ideal-observer framework to investigate signal detectability in diffuse optical imaging. Biomedical Optics Express, 4(10), 2107-2123. PMID: 24156068; PMCID: PMC3799670. Abstract: With the emergence of diffuse optical tomography (DOT) as a non-invasive imaging modality, there is a requirement to evaluate the performance of the developed DOT systems on clinically relevant tasks. One such important task is the detection of high-absorption signals in the tissue. To investigate signal detectability in DOT systems for system optimization, an appropriate approach is to use the Bayesian ideal observer, but this observer is computationally very intensive. It has been shown that the Fisher information can be used as a surrogate figure of merit (SFoM) that approximates the ideal observer performance. In this paper, we present a theoretical framework to use the Fisher information for investigating signal detectability in DOT systems. The usage of Fisher information requires evaluating the gradient of the photon distribution function with respect to the absorption coefficients. We derive the expressions to compute the gradient of the photon distribution function with respect to the scattering and absorption coefficients. We find that computing these gradients simply requires executing the radiative transport equation with a different source term. We then demonstrate the application of the SFoM to investigate signal detectability in DOT by performing various simulation studies, which help to validate the proposed framework and also present some insights on signal detectability in DOT. © 2013 Optical Society of America.
- Jha, A. K., Clarkson, E., Kupinski, M. A., & Barrett, H. H. (2013). Joint reconstruction of activity and attenuation map using LM SPECT emission data. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 8668. Abstract: Attenuation and scatter correction in single photon emission computed tomography (SPECT) imaging often requires a computed tomography (CT) scan to compute the attenuation map of the patient. This results in increased radiation dose for the patient, and also has other disadvantages such as increased costs and hardware complexity. Attenuation in SPECT is a direct consequence of Compton scattering, and therefore, if the scattered-photon data can give information about the attenuation map, then the CT scan may not be required. In this paper, we investigate the possibility of joint reconstruction of the activity and attenuation map using list-mode (LM) SPECT emission data, including the scattered-photon data. We propose a path-based formalism to process scattered-photon data. Following this, we derive analytic expressions to compute the Cramér-Rao bound (CRB) of the activity and attenuation map estimates, using which we can explore the fundamental limit of information-retrieval capacity from LM SPECT emission data. We then suggest a maximum-likelihood (ML) scheme that uses the LM emission data to jointly reconstruct the activity and attenuation map. We also propose an expectation-maximization (EM) algorithm to compute the ML solution. © 2013 SPIE.
- Jha, A. K., Kupinski, M. A., Rodriguez, J. J., Stephen, R. M., & Stopeck, A. T. (2013). Corrigendum: Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard. Physics in Medicine and Biology, 58(1), 183.
- Jha, A. K., Van Dam, H. T., Kupinski, M. A., & Clarkson, E. (2013). Simulating silicon photomultiplier response to scintillation light. IEEE Transactions on Nuclear Science, 60(1), 336-351. doi:10.1109/tns.2012.2234135. Abstract: The response of a Silicon Photomultiplier (SiPM) to optical signals is affected by many factors including photon-detection efficiency, recovery time, gain, optical crosstalk, afterpulsing, dark count, and detector dead time. Many of these parameters vary with overvoltage and temperature. When used to detect scintillation light, there is a complicated non-linear relationship between the incident light and the response of the SiPM. In this paper, we propose a combined discrete-time discrete-event Monte Carlo (MC) model to simulate SiPM response to scintillation light pulses. Our MC model accounts for all relevant aspects of the SiPM response, some of which were not accounted for in the previous models. We also derive and validate analytic expressions for the single-photoelectron response of the SiPM and the voltage drop across the quenching resistance in the SiPM microcell. These analytic expressions consider the effect of all the circuit elements in the SiPM and accurately simulate the time-variation in overvoltage across the microcells of the SiPM. Consequently, our MC model is able to incorporate the variation of the different SiPM parameters with varying overvoltage. The MC model is compared with measurements on SiPM-based scintillation detectors and with some cases for which the response is known a priori. The model is also used to study the variation in SiPM behavior with SiPM-circuit parameter variations and to predict the response of a SiPM-based detector to various scintillators. © 1963-2012 IEEE.
- Kang, D., & Kupinski, M. A. (2013). Figure of merit for task-based assessment of frequency-domain diffusive imaging. Optics Letters, 38(2), 235-237. PMID: 23454973. Abstract: A figure of merit (FOM) for frequency-domain diffusive imaging (FDDI) is theoretically developed adapting the concept of Hotelling observer signal-to-noise ratio. Different from conventionally used FOMs for FDDI, the newly developed FOM considers diffused intensities, modulation amplitudes, and phases in combination. The FOM applied to Monte Carlo simulations of signal- and background-known-exactly problems shows unique characteristics that are in agreement with findings in the literature. We believe that a task-based assessment using the FOM improves the characterization of FDDI systems and allows for complete system optimization. © 2013 Optical Society of America.
- Lee, C., Kupinski, M. A., & Volokh, L. (2013). Assessment of cardiac single-photon emission computed tomography performance using a scanning linear observer. Medical Physics, 40(1), 011906. PMID: 23298097; PMCID: PMC3581138. Abstract: Purpose: Single-photon emission computed tomography (SPECT) is widely used to detect myocardial ischemia and myocardial infarction. It is important to assess and compare different SPECT system designs in order to achieve the highest detectability of cardiac defects. Methods: Whitaker's study ["Estimating random signal parameters from noisy images with nuisance parameters: linear and scanning-linear methods," Opt. Express 16(11), 8150-8173 (2008); doi:10.1364/OE.16.008150] on the scanning linear observer (SLO) shows that the SLO can be used to estimate the location and size of signals. One major advantage of the SLO is that it can be used with projection data rather than with reconstruction data. Thus, this observer model assesses the overall hardware performance independent of any reconstruction algorithm. In addition, the computation time of image quality studies is significantly reduced. In this study, three systems based on the design of the GE cadmium zinc telluride-based dedicated cardiac SPECT camera Discovery 530c were assessed. This design, which is officially named the Alcyone Technology: Discovery NM 530c, was commercialized in August 2009. The three systems, GE27, GE19, and GE13, contain 27, 19, and 13 detectors, respectively. Clinically, a human heart can be virtually segmented into three coronary artery territories: the left-anterior descending artery, left-circumflex artery, and right coronary artery. One of the most important functions of a cardiac SPECT system is to produce images from which a radiologist can accurately predict in which territory the defect exists [http://www.asnc.org/media/PDFs/PPReporting081511.pdf, Guideline from American Society of Nuclear Cardiology]. A good estimation of the extent of the defect from the projection images is also very helpful for determining the seriousness of the myocardial ischemia. In this study, both the location and extent of defects were estimated by the SLO, and the system performance was assessed by localization receiver operating characteristic (LROC) [P. Khurd and G. Gindi, "Decision strategies maximizing the area under the LROC curve," Proc. SPIE 5749, 150-161 (2005); doi:10.1117/12.595915] or estimation receiver operating characteristic (EROC) [E. Clarkson, "Estimation receiver operating characteristic curve and ideal observers for combined detection/estimation tasks," J. Opt. Soc. Am. A 24, B91-B98 (2007); doi:10.1364/JOSAA.24.000B91] curves. Results: The area under the LROC/EROC curve (AULC/AUEC) and the true positive fraction (TPF) at a specific false positive fraction (FPF) can be treated as the figures of merit. For radii estimation with a 1 mm tolerance, the AUEC values of the GE27, GE19, and GE13 systems are 0.8545, 0.8488, and 0.8329, and the TPF at FPF = 5% are 77.1%, 76.46%, and 73.55%, respectively. The assessment of all three systems revealed that the GE19 system yields estimated information and cardiac defect detectability very close to those of the GE27 system while using eight fewer detectors. Thus, 30% of the expensive detector units can be removed with confidence. Conclusions: As the results show, a combination of the SLO and LROC/EROC curves can determine the configuration that yields the most relevant estimation/detection information. Thus, this is a useful method for assessing cardiac SPECT systems. © 2013 American Association of Physicists in Medicine.
- Tseng, H., Fan, J., Kupinski, M., Sainath, P., & Hsieh, J. (2013). TU-C-103-05: Image Quality and Dose Reduction Evaluation of a New CT Iterative Reconstruction Algorithm Using Model Observers. Medical Physics, 40(6), 437.
- Huang, J., Lee, K., Clarkson, E., Kupinski, M., & Rolland, J. P. (2012). Quantitative measurement of tear film dynamics with optical coherence tomography and statistical decision theory. Journal of Vision, 12(14), 39.
- Huang, J., Lee, K., Clarkson, E., Kupinski, M., & Rolland, J. P. (2012). Task-based assessment and optimization of spectral domain optical coherence tomography for tear film imaging. Frontiers in Optics, FIO 2012. Abstract: Using a task-based assessment method, we adopt detectability as a performance metric to evaluate and optimize spectral domain optical coherence tomography (SD-OCT) for tear film imaging. © OSA 2012.
- Jha, A. K., Kupinski, M. A., & Van Dam, H. T. (2012). Monte Carlo simulation of silicon photomultiplier output in response to scintillation induced light. IEEE Nuclear Science Symposium Conference Record, 1693-1696. Abstract: The response of a Silicon Photomultiplier (SiPM) to optical signals is affected by many factors including optical cross talk, afterpulsing, dark current, detector dead time, recovery time, and gain. Many of these parameters vary with over-voltage. When used to detect scintillation light, it is difficult to relate the response of the SiPM to the incident light, and the relationship can be highly nonlinear. In this paper, we propose a Monte Carlo (MC) model for simulating the response of the SiPM to scintillation-induced light pulses, which can be used to relate the optical signal with the SiPM response. Developing further on the previous works in this field, the model simulates the various aspects of SiPM response, including photon detection efficiency, recovery time, gain variation, and dead time, while accounting for the temporal and statistical distribution of the incident light, optical cross-talk, afterpulsing, and dark current. It also considers the variation of the different SiPM parameters with varying over-voltage. We have also derived analytic expressions for the single-photon response and the voltage drop across the quenching resistance, which help in accurate simulation of the SiPM response. The model compares well with measurements on a SiPM-based scintillation detector. It is also in agreement with the expected mathematical response when the input is an instantaneous light pulse. © 2011 IEEE.
- Jha, A. K., Kupinski, M. A., Barrett, H. H., Clarkson, E., & Hartman, J. H. (2012). Three-dimensional Neumann-series approach to model light transport in nonuniform media. JOSA A, 29(9), 1885-1899.
- Jha, A. K., Kupinski, M. A., Masumura, T., Clarkson, E., Maslov, A. V., & Barrett, H. H. (2012). Simulating photon-transport in uniform media using the radiative transport equation: A study using the Neumann-series approach. Journal of the Optical Society of America A: Optics and Image Science, and Vision, 29(8), 1741-1757. PMID: 23201893; PMCID: PMC3985394. Abstract: We present the implementation, validation, and performance of a Neumann-series approach for simulating light propagation at optical wavelengths in uniform media using the radiative transport equation (RTE). The RTE is solved for an anisotropic-scattering medium in a spherical harmonic basis for a diffuse-optical-imaging setup. The main objectives of this paper are threefold: to present the theory behind the Neumann-series form for the RTE, to design and develop the mathematical methods and the software to implement the Neumann series for a diffuse-optical-imaging setup, and, finally, to perform an exhaustive study of the accuracy, practical limitations, and computational efficiency of the Neumann-series method. Through our results, we demonstrate that the Neumann-series approach can be used to model light propagation in uniform media with small geometries at optical wavelengths. © 2012 Optical Society of America.
- Jha, A. K., Kupinski, M. A., Rodriguez, J. J., Stephen, R. M., & Stopeck, A. T. (2012). Task-Based Evaluation of Segmentation Algorithms for Diffusion-Weighted MRI without Using a Gold Standard. Physics in Medicine and Biology, 57(13), 4425-4446.
- Kang, D., & Kupinski, M. A. (2012). Effect of noise on modulation amplitude and phase in frequency-domain diffusive imaging. Journal of Biomedical Optics, 17(1), 016010. PMID: 22352660. Abstract: We theoretically investigate the effect of noise on frequency-domain heterodyne and/or homodyne measurements of intensity-modulated beams propagating through diffusive media, such as a photon density wave. We assume that the attenuated amplitude and delayed phase are estimated by taking the Fourier transform of the noisy, modulated output data. We show that the estimated amplitude and phase are biased when the number of output photons is small. We also show that the use of image intensifiers for photon amplification in heterodyne or homodyne measurements increases the amount of bias. In particular, it turns out that the biased estimation is independent of AC-dependent noise in sinusoidal heterodyne or homodyne outputs. Finally, the developed theory indicates that the previously known variance model of modulation amplitude and phase is not valid in low-light situations. Monte Carlo simulations with varied numbers of input photons verify our theoretical trends of the bias. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
- Kang, D., & Kupinski, M. A. (2012). Noise characteristics of heterodyne/homodyne frequency-domain measurements. Journal of Biomedical Optics, 17(1), 015002. PMID: 22352646; PMCID: PMC3603149. Abstract: We theoretically develop and experimentally validate the noise characteristics of heterodyne and/or homodyne measurements that are widely used in frequency-domain diffusive imaging. The mean and covariance of the modulated heterodyne output are derived by adapting the random amplification of a temporal point process. A multinomial selection rule is applied to the result of the temporal noise analysis to additionally model the spatial distribution of intensified photons measured by a charge-coupled device (CCD), which shows that the photon detection efficiency of CCD pixels plays an important role in the noise property of detected photons. The approach of using a multinomial probability law is validated from experimental results. Also, experimentally measured characteristics of means and variances of homodyne outputs are in agreement with the developed theory. The developed noise model can be applied to all photon amplification processes. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
- Kang, D., & Kupinski, M. A. (2012). Noise characteristics of heterodyne/homodyne frequency-domain measurements. Journal of biomedical optics, 17(1), 0150021--01500211.
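The amplitude bias analyzed in the two 2012 frequency-domain noise papers is easy to reproduce numerically: with Poisson photon noise, the DFT-based modulation-depth estimator overshoots at low photon counts because the magnitude of a noisy complex Fourier coefficient is positively biased. The sketch below is illustrative only, not the authors' code; the modulation depth, phase, bin count, and photon budgets are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_mod(counts):
    # DFT-based estimators: one full modulation period per record, so the
    # fundamental sits in bin 1 of the FFT.
    F = np.fft.fft(counts)
    amp = 2.0 * np.abs(F[1]) / counts.sum()   # modulation-depth estimate
    phase = np.angle(F[1])                    # phase-delay estimate
    return amp, phase

def mean_amp(n_photons, m_true=0.4, phi=0.7, n_bins=32, n_trials=4000):
    # Average the modulation-depth estimate over many Poisson realizations
    # of a sinusoidally modulated intensity with n_photons expected counts.
    t = 2.0 * np.pi * np.arange(n_bins) / n_bins
    lam = (n_photons / n_bins) * (1.0 + m_true * np.cos(t + phi))
    amps = []
    for _ in range(n_trials):
        counts = rng.poisson(lam)
        if counts.sum() > 0:
            amps.append(estimate_mod(counts)[0])
    return float(np.mean(amps))
```

Comparing `mean_amp` at a small versus a large photon budget shows the bias shrinking as the count grows, consistent with the low-light breakdown of the usual variance model.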
- Clarkson, E., Palit, R., & Kupinski, M. A. (2011). SVD for imaging systems with discrete rotational symmetry. Optics InfoBase Conference Papers. Abstract: In the presence of discrete rotational symmetry for a tomographic imaging system, we show that the dimension of the SVD computation can be reduced by a factor equal to the number of collection angles. © 2011 OSA.
- Kang, D., & Kupinski, M. A. (2011). Signal detectability in diffusive media using phased arrays in conjunction with detector arrays. Optics Express, 19(13), 12261-12274. PMID: 21716463. Abstract: We investigate Hotelling observer performance (i.e., signal detectability) of a phased array system for tasks of detecting small inhomogeneities and distinguishing adjacent abnormalities in uniform diffusive media. Unlike conventional phased array systems, where a single detector is located on the interface between two sources, we consider a detector array, such as a CCD, on the phantom exit surface for calculating the Hotelling observer detectability. The signal detectability for adjacent small abnormalities (2 mm displacement) for the CCD-based phased array is related to the resolution of reconstructed images. Simulations show that acquiring high-dimensional data from a detector array in a phased array system dramatically improves the detectability for both tasks when compared to conventional single-detector measurements, especially at low modulation frequencies. In all studied cases there exists a modulation frequency that optimizes the CCD-based phased array system, at which detectability for both tasks is consistently high. These results imply that the CCD-based phased array has the potential to achieve high resolution and signal detectability in tomographic diffusive imaging while operating at a very low modulation frequency. The effect of other configuration parameters, such as the detector pixel size, on observer performance is also discussed. © 2011 Optical Society of America.
- Barrett, H. H., Wilson, D. W., Kupinski, M. A., Aguwa, K., Ewell, L., Hunter, R., & Müller, S. (2010). Therapy operating characteristic (TOC) curves and their application to the evaluation of segmentation algorithms. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 7627. Abstract: This paper presents a general framework for assessing imaging systems and image-analysis methods on the basis of therapeutic rather than diagnostic efficacy. By analogy to receiver operating characteristic (ROC) curves, it introduces the therapy operating characteristic, or TOC, curve, which is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall level of a radiotherapy treatment beam is varied. The proposed figure of merit is the area under the TOC, denoted AUTOC. If the treatment-planning algorithm is held constant, AUTOC is a metric for the imaging and image-analysis components, and in particular for segmentation algorithms that are used to delineate tumors and normal tissues. On the other hand, for a given set of segmented images, AUTOC can also be used as a metric for the treatment plan itself. A general mathematical theory of TOC and AUTOC is presented and then specialized to segmentation problems. Practical approaches to implementing the theory in both simulation and clinical studies are presented. The method is illustrated with a brief study of segmentation methods for prostate cancer. © 2010 SPIE - The International Society for Optical Engineering.
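The TOC construction described in that abstract reduces to a small computation: sweep the overall beam level, record one (P_NTC, P_TC) pair per level, and integrate the resulting curve. The logistic dose-response curves below are made-up models chosen only to illustrate the bookkeeping; the d50 and slope values are not from the paper.

```python
import numpy as np

def logistic(d, d50, k=4.0):
    # Illustrative dose-response curve as the overall beam level d is scaled.
    return 1.0 / (1.0 + np.exp(-k * (d - d50)))

d = np.linspace(0.0, 5.0, 2001)     # sweep of overall treatment-beam levels
p_tc = logistic(d, d50=1.0)         # probability of tumor control
p_ntc = logistic(d, d50=1.4)        # probability of normal-tissue complications
                                    # (gap between d50s = therapeutic window)

# One TOC point per beam level; AUTOC by the trapezoid rule along P_NTC.
autoc = float(0.5 * np.sum((p_ntc[1:] - p_ntc[:-1]) * (p_tc[1:] + p_tc[:-1])))
```

When tumor control turns on at a lower beam level than complications do, the TOC lies above the diagonal and AUTOC exceeds 0.5, mirroring how AUC behaves for ROC curves.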
- Clarkson, E., Palit, R., & Kupinski, M. A. (2010). SVD for imaging systems with discrete rotational symmetry. Optics Express, 18(24), 25306-25320. PMID: 21164879; PMCID: PMC3027225. Abstract: The singular value decomposition (SVD) of an imaging system is a computationally intensive calculation for tomographic imaging systems due to the large dimensionality of the system matrix. The computation often involves memory and storage requirements beyond those available to most end users. We have developed a method that reduces the dimension of the SVD problem toward the goal of making the calculation tractable for a standard desktop computer. In the presence of discrete rotational symmetry, we show that the dimension of the SVD computation can be reduced by a factor equal to the number of collection angles for the tomographic system. In this paper we present the mathematical theory for our method, validate that our method produces the same results as standard SVD analysis, and finally apply our technique to the sensitivity matrix for a clinical CT system. The ability to compute the full singular-value spectra and singular vectors could augment future work in system characterization, image-quality assessment, and reconstruction techniques for tomographic imaging systems. © 2010 Optical Society of America.
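The dimension reduction claimed for rotationally symmetric systems can be checked on a toy case: when the system matrix is block-circulant in the angle index (one way discrete rotational symmetry can manifest), a DFT across that index block-diagonalizes it, so one SVD of a Pm-by-Pn matrix splits into P independent m-by-n SVDs. A minimal sketch with random blocks; the sizes and the block-circulant structure are illustrative assumptions, not the paper's CT matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

P, m, n = 8, 6, 5                        # 8 "collection angles", small blocks
B = [rng.standard_normal((m, n)) for _ in range(P)]

# Full system matrix: block-circulant, row-block i holds B[(j - i) % P].
H = np.block([[B[(j - i) % P] for j in range(P)] for i in range(P)])

# Reduced computation: a DFT across the angle index block-diagonalizes H,
# so the SVD splits into P small SVDs of the DFT-transformed blocks.
w = np.exp(-2j * np.pi / P)
sv_reduced = np.concatenate([
    np.linalg.svd(sum(B[j] * w ** (j * k) for j in range(P)),
                  compute_uv=False)
    for k in range(P)
])

sv_full = np.linalg.svd(H, compute_uv=False)
# The two singular-value spectra agree after sorting.
assert np.allclose(np.sort(sv_full), np.sort(sv_reduced))
```

Each small SVD costs far less than the full one, which is the factor-of-P reduction the paper exploits at clinical scale.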
- Hesterman, J. Y., Caucci, L., Kupinski, M. A., Barrett, H. H., & Furenlid, L. R. (2010). Maximum-likelihood estimation with a contracting-grid search algorithm. IEEE Transactions on Nuclear Science, 57(3), 1077-1084. Abstract: A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second, a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. © 2010 IEEE.
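The contracting-grid idea is simple to sketch: evaluate the likelihood on a coarse grid, recenter the grid on the best node, halve its width, and repeat. The toy 2-D log-likelihood below (a quadratic standing in for a real PMT light-response model) and all grid parameters are illustrative assumptions, not the published implementation.

```python
import numpy as np

def contracting_grid_max(f, center, width, n=5, iters=10):
    # Maximize f(x, y): evaluate on an n-by-n grid, recenter on the best
    # node, halve the grid width, and repeat (contracting-grid search).
    cx, cy = center
    for _ in range(iters):
        xs = cx + np.linspace(-width / 2.0, width / 2.0, n)
        ys = cy + np.linspace(-width / 2.0, width / 2.0, n)
        vals = np.array([[f(x, y) for y in ys] for x in xs])
        i, j = np.unravel_index(np.argmax(vals), vals.shape)
        cx, cy, width = xs[i], ys[j], width / 2.0
    return cx, cy

def loglike(x, y, x0=1.3, y0=-0.7):
    # Toy log-likelihood peaked at the "interaction location" (x0, y0).
    return -((x - x0) ** 2 + (y - y0) ** 2)

xhat, yhat = contracting_grid_max(loglike, center=(0.0, 0.0), width=8.0)
```

Because every grid node can be evaluated independently, each iteration maps naturally onto the pipelined FPGA and GPU implementations the paper describes.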
- Jha, A. K., Kupinski, M. A., Rodríguez, J. J., Stephen, R. M., & Stopeck, A. T. (2010). ADC estimation in multi-scan DWMRI. Optics InfoBase Conference Papers. Abstract: A maximum-likelihood-based scheme for estimating the apparent diffusion coefficient (ADC) in diffusion-weighted MRI is presented, which allows data from multiple scans acquired at the same diffusion-gradient value to be combined for accurate ADC computation. © 2010 Optical Society of America.
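In DWMRI the signal follows S(b) ≈ S0·exp(-b·ADC), so repeated scans at each b-value can be averaged and the two parameters fit. The paper uses a maximum-likelihood scheme; the sketch below substitutes the simpler log-linear least-squares fit under additive Gaussian noise, a common surrogate (real magnitude-MRI noise is Rician), with made-up b-values and noise level.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_adc(bvals, signals):
    # Log-linear least-squares fit of S(b) = S0 * exp(-b * ADC); a simple
    # surrogate for a full maximum-likelihood estimator.
    A = np.vstack([np.ones_like(bvals), -np.asarray(bvals, float)]).T
    coef, *_ = np.linalg.lstsq(A, np.log(signals), rcond=None)
    return np.exp(coef[0]), coef[1]   # (S0 estimate, ADC estimate)

# Multi-scan data: 8 repeat scans at each b-value, averaged before fitting.
bvals = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])   # s/mm^2 (illustrative)
true_s0, true_adc = 1000.0, 1.1e-3                     # ADC in mm^2/s
scans = true_s0 * np.exp(-true_adc * bvals)[:, None] * np.ones((1, 8))
scans = scans + rng.normal(0.0, 2.0, scans.shape)      # additive Gaussian noise
s0_hat, adc_hat = fit_adc(bvals, scans.mean(axis=1))
```

Averaging the repeat scans before the fit is what lets the multi-scan data sharpen the ADC estimate; a likelihood-based scheme would instead weight each scan by its noise model.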
- Kupinski, M. A., Clarkson, E., Jha, A. K., & Kang, D. (2010). Solutions to the Radiative Transport Equation for Non-uniform Media. Biomedical Optics. doi:10.1364/biomed.2010.bsud55. Abstract: A method for modeling the 3-D propagation of photons in non-uniform media based on the radiative transport equation is presented and demonstrated to work on homogeneous and heterogeneous tissue-like phantoms.
- Young, S., Kupinski, M. A., & Jha, A. K. (2010). Estimating Signal Detectability in a Model Diffuse Optical Imaging System. Biomedical Optics. doi:10.1364/biomed.2010.bsud26. Abstract: Diffuse optical imaging (DOI) researchers need metrics for quantifying signal detectability to assess different hardware configurations. Using Monte Carlo and statistical model observers, we estimated DOI signal detectability to compare source, signal, and detector parameters.
- Clarkson, E., & Kupinski, M. A. (2009). Global compartmental pharmacokinetic models for spatiotemporal SPECT and PET imaging. SIAM Journal on Imaging Sciences, 2(1), 203-225. doi:10.1137/080715226. Abstract: A new mathematical framework is introduced for combining the linear compartmental models used in pharmacokinetics with the spatiotemporal distributions of activity that are measured in single photon emission computed tomography (SPECT) and PET imaging. This approach is global in the sense that the compartmental differential equations involve only the overall spatially integrated activity in each compartment. The kinetics for the local compartmental activities are not specified by the model and would be determined from data. It is shown that an increase in information about the spatial distribution of the local compartmental activities leads to an increase in the number of identifiable quantities associated with the compartmental matrix. These identifiable quantities, which are important kinetic parameters in applications, are determined by computing the invariants of a symmetry group. This group generates the space of compartmental matrices that are compatible with a given activity distribution, input function, and set of support constraints. An example is provided where all of the compartmental spatial supports have been separated, except that of the vascular compartment. The question of estimating the identifiable parameters from SPECT and PET data is also discussed.
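The "global" model works with the spatially integrated activity A_i(t) in each compartment, which obeys a linear system dA/dt = KA driven by the compartmental matrix K. A minimal forward simulation (two compartments, forward Euler, made-up rate constants) illustrates the form of that matrix; none of the numbers come from the paper.

```python
import numpy as np

# Two-compartment "global" model: A[i] is the spatially integrated activity
# in compartment i, and dA/dt = K @ A. Rate constants are illustrative only.
k12, k21, k10 = 0.3, 0.1, 0.05        # 1 -> 2, 2 -> 1, 1 -> out (1/min)
K = np.array([[-(k12 + k10), k21],
              [k12,          -k21]])

dt, n_steps = 0.01, 6000              # forward Euler over 60 minutes
A = np.array([1.0, 0.0])              # bolus starts entirely in compartment 1
totals = np.empty(n_steps)
for step in range(n_steps):
    A = A + dt * (K @ A)
    totals[step] = A.sum()            # total activity, drained only via k10
```

Because the off-diagonal exchange terms cancel in the column sums, total activity can only leave through the washout rate k10, one of the structural constraints that shapes which entries of K are identifiable.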
- Palit, R., Kupinski, M. A., Barrett, H. H., Clarkson, E. W., Aarsvold, J. N., Volokh, L., & Grobshtein, Y. (2009). Singular value decomposition of pinhole SPECT systems. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 7263. Abstract: A single photon emission computed tomography (SPECT) imaging system can be modeled by a linear operator H that maps from object space to detector pixels in image space. The singular vectors and singular-value spectra of H provide useful tools for assessing system performance. The number of voxels used to discretize object space and the number of collection angles and pixels used to measure image space make the dimensions of H large. As a result, H must be stored sparsely, which renders several conventional singular value decomposition (SVD) methods impractical. We used an iterative power-method SVD algorithm (Lanczos) designed to operate on very large, sparsely stored matrices to calculate the singular vectors and singular-value spectra for two small-animal pinhole SPECT imaging systems: FastSPECT II and M3R. The FastSPECT II system consists of two rings of eight scintillation cameras each; the resulting dimensions of H were 68921 voxels by 97344 detector pixels. The M3R system is a four-camera system that was reconfigured to measure image space using a single scintillation camera; the resulting dimensions of H were 50864 voxels by 6241 detector pixels. We present results of the SVD of each system and discuss calculation of the measurement and null spaces.
- Barrett, H. H., Furenlid, L. R., Freed, M., Hesterman, J. Y., Kupinski, M. A., Clarkson, E., & Whitaker, M. K. (2008). Adaptive SPECT. IEEE Transactions on Medical Imaging, 27(6), 775-788. PMID: 18541485; PMCID: PMC2575754. Abstract: Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed.
- Caucci, L., Kupinski, M. A., Freed, M., Furenlid, L. R., Wilson, D. W., & Barrett, H. H. (2008). Adaptive SPECT for tumor necrosis detection. IEEE Nuclear Science Symposium Conference Record, 5548-5551. Abstract: In this paper, we consider a prototype of an adaptive SPECT system, and we use simulation to objectively assess the system's performance with respect to a conventional, non-adaptive SPECT system. Objective performance assessment is investigated for a clinically relevant task: the detection of tumor necrosis at a known location and in a random lumpy background. The iterative maximum-likelihood expectation-maximization (MLEM) algorithm is used to perform image reconstruction. We carried out human observer studies on the reconstructed images and compared the probability of correct detection when the data are generated with the adaptive system as opposed to the non-adaptive system. Task performance is also assessed by using a channelized Hotelling observer, and the area under the receiver operating characteristic curve is the figure of merit for the detection task. Our results show a large performance improvement of adaptive systems versus non-adaptive systems and motivate further research in adaptive medical imaging. © 2008 IEEE.
- Clarkson, E., Kupinski, M. A., Barrett, H. H., & Furenlid, L. (2008). A task-based approach to adaptive and multimodality imaging. Proceedings of the IEEE, 96(3), 500-511. doi:10.1109/jproc.2007.913553. Abstract: Multimodality imaging is becoming increasingly important in medical imaging. Since the motivation for combining multiple imaging modalities is generally to improve diagnostic or prognostic accuracy, the benefits of multimodality imaging cannot be assessed through the display of example images. Instead, we must use objective, task-based measures of image quality to draw valid conclusions about system performance. In this paper, we present a general framework for utilizing objective, task-based measures of image quality in assessing multimodality and adaptive imaging systems. We introduce a classification scheme for multimodality and adaptive imaging systems and provide a mathematical description of the imaging chain along with block diagrams to provide a visual illustration. We show that the task-based methodology developed for evaluating single-modality imaging can be applied, with minor modifications, to multimodality and adaptive imaging. We discuss strategies for practical implementation of task-based methods to assess and optimize multimodality imaging systems.
- Freed, M., Kupinski, M. A., Furenlid, L. R., Wilson, D. W., & Barrett, H. H. (2008). A prototype instrument for single pinhole small animal adaptive SPECT imaging. Medical Physics, 35(5), 1912-1925. PMID: 18561667; PMCID: PMC2575412. Abstract: The authors have designed and constructed a small-animal adaptive SPECT imaging system as a prototype for quantifying the potential benefit of adaptive SPECT imaging over the traditional fixed-geometry approach. The optical design of the system is based on filling the detector with the region of interest for each viewing angle, maximizing the sensitivity, and optimizing the resolution in the projection images. Additional feedback rules for determining the optimal geometry of the system can easily be added to the existing control software. Preliminary data have been taken of a phantom with a small, hot, offset lesion in a flat background in both adaptive and fixed-geometry modes. A comparison of the predicted system behavior with the actual system behavior is presented, along with recommendations for system improvements. © 2008 American Association of Physicists in Medicine.
- Furenlid, L. R., Moore, J. W., Freed, M., Kupinski, M. A., Clarkson, E., Liu, Z., Wilson, D. W., Woolfenden, J. M., & Barrett, H. H. (2008). Adaptive small-animal SPECT/CT. 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Proceedings, ISBI, 1407-1410. Abstract: We are exploring the concept of adaptive multimodality imaging, a form of non-linear optimization where the imaging configuration is automatically adjusted in response to the object. Preliminary studies suggest that substantial improvement in objective, task-based measures of image quality can result. We describe here our work to add motorized adjustment capabilities and a matching CT to our existing FastSPECT II system to form an adaptive small-animal SPECT/CT. © 2008 IEEE.
- Brème, A., Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2007). Adaptive Hotelling discriminant functions. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 6515, 65150T. Abstract: Any observer performing a detection task on an image produces a single number that represents the observer's confidence that a signal (e.g., a tumor) is present. A linear observer produces this test statistic using a linear template or a linear discriminant. The optimal linear discriminant is well known to be the Hotelling observer and uses both first- and second-order statistics of the image data. There are many situations where it is advantageous to consider discriminant functions that adapt themselves to some characteristics of the data. In these situations, the linear template is itself a function of the data and, thus, the observer is nonlinear. In this paper, we present an example adaptive Hotelling discriminant and compare the performance of this observer to that of the Hotelling observer and the Bayesian ideal observer. The task is to detect a signal that is embedded in one of a finite number of possible random backgrounds. Each random background is Gaussian but has different covariance properties. The observer uses the image data to determine which background type is present and then uses the template appropriate for that background. We show that the performance of this particular observer falls between that of the Hotelling and ideal observers.
- Freed, M., Kupinski, M. A., Furenlid, L. R., & Barrett, H. H. (2007). A prototype instrument for adaptive SPECT imaging. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 6510(Part 1). Abstract: We have designed and constructed a small-animal adaptive SPECT imaging system as a prototype for quantifying the potential benefit of adaptive SPECT imaging over the traditional fixed-geometry approach. The optical design of the system is based on filling the detector with the object for each viewing angle, maximizing the sensitivity, and optimizing the resolution in the projection images. Additional feedback rules for determining the optimal geometry of the system can be easily added to the existing control software. Preliminary data have been taken of a phantom with a small, hot, offset lesion in a flat background in both adaptive and fixed-geometry modes. A comparison of the predicted system behavior with the actual system behavior is presented, along with recommendations for system improvements.
- Hagen, N., Kupinski, M., & Dereniak, E. L. (2007). Gaussian profile estimation in one dimension. Applied Optics, 46(22), 5374-5383.
- Hesterman, J. Y., Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2007). Hardware assessment using the multi-module, multi-resolution system (M3R): A signal-detection study. Medical Physics, 34(7), 3034-3044. PMID: 17822011; PMCID: PMC2471875. Abstract: The multi-module, multi-resolution system (M3R) is used for hardware assessment in objective, task-based signal-detection studies in projection data. A phantom capable of generating multiple realizations of a random textured background is introduced. Measured backgrounds from this phantom are used along with simulated lumpy and uniform backgrounds to investigate signal-to-noise ratio as a function of exposure time. Results agree with theoretical predictions, exhibiting a power-law-like dependence previously seen for studies performed either in simulation or without an imaging system, and help validate the use of simulated lumpy backgrounds in observer studies. A second study looks at signal-detection performance, measured by AUC (area under the receiver operating characteristic curve), in lumpy backgrounds for 20 M3R aperture combinations as a function of lump size and signal size. Observer performance reveals an improvement in AUC for certain ranges of signal and lump combinations through the use of multiplexed, multiple-pinhole apertures, indicating a need for task-specific aperture optimization. The channelized Hotelling observer is used with Laguerre-Gauss channels for both observer studies. Methods for selecting the number of channels and the channel width are discussed. © 2007 American Association of Physicists in Medicine.
- Hesterman, J. Y., Kupinski, M. A., Clarkson, E., Wilson, D. W., & Barrett, H. H. (2007). Evaluation of hardware in a small-animal SPECT system using reconstructed images. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 6515, 65151G. Abstract: Evaluation of imaging hardware represents a vital component of system design. In small-animal SPECT imaging, this evaluation has become increasingly difficult with the emergence of multi-pinhole apertures and adaptive, or patient-specific, imaging. This paper describes two methods for hardware evaluation using reconstructed images. The first method is a rapid technique incorporating a system-specific non-linear, three-dimensional point response. This point response is easily computed and offers qualitative insight into an aperture's resolution and artifact characteristics. The second method is an objective assessment of signal detection in lumpy backgrounds using the channelized Hotelling observer (CHO) with 3D Laguerre-Gauss and difference-of-Gaussian channels to calculate the area under the receiver-operating characteristic curve (AUC). Previous work presented at this meeting described a unique small-animal SPECT system (M3R) capable of operating under a myriad of hardware configurations and ideally suited for image-quality studies. Measured system matrices were collected for several hardware configurations of M3R, and the data used in these two methods were then generated by taking simulated objects through the measured system matrices. The results of these two methods comprise a combination of qualitative and quantitative analysis that is well-suited for hardware assessment.
- Hesterman, J. Y., Kupinski, M. A., Furenlid, L. R., Wilson, D. W., & Barrett, H. H. (2007). The multi-module, multi-resolution system (M3R): A novel small-animal SPECT system. Medical Physics, 34(3), 987-993. PMID: 17441245; PMCID: PMC2517228. Abstract: We have designed and built an inexpensive, high-resolution, tomographic imaging system, dubbed the multi-module, multi-resolution system, or M3R. Slots machined into the system shielding allow for the interchange of pinhole plates, enabling the system to operate over a wide range of magnifications and with virtually any desired pinhole configuration. The flexibility of the system allows system optimization for specific imaging tasks and also allows for modifications necessary due to improved detectors, electronics, and knowledge of system construction (e.g., system sensitivity optimization). We provide an overview of M3R, focusing primarily on system design and construction, aperture construction, and calibration methods. Reconstruction algorithms will be described and reconstructed images presented. © American Association of Physicists in Medicine.
- Kupinski, M. A., Clarkson, E., & Hesterman, J. Y. (2007). Bias in Hotelling observer performance computed from finite data. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 6515. Abstract: An observer performing a detection task analyzes an image and produces a single number, a test statistic, for that image. This test statistic represents the observer's "confidence" that a signal (e.g., a tumor) is present. The linear observer that maximizes the test-statistic SNR is known as the Hotelling observer. Generally, computation of the Hotelling SNR, or Hotelling trace, requires the inverse of a large covariance matrix. Recent developments have resulted in methods for the estimation and inversion of these large covariance matrices with relatively small numbers of images. The estimation and inversion of these matrices is made possible by a covariance-matrix decomposition that splits the full covariance matrix into an average detector-noise component and a background-variability component. Because the average detector-noise component is often diagonal and/or easily estimated, a full-rank, invertible covariance matrix can be produced with few images. We have studied the bias of estimates of the Hotelling trace using this decomposition for high-detector-noise and low-detector-noise situations. In extremely low-noise situations, this covariance decomposition may result in a significant bias. We will present a theoretical evaluation of the Hotelling-trace bias, as well as extensive simulation studies.
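The covariance decomposition at the heart of this paper can be illustrated with a toy model: a known diagonal detector-noise term plus a low-rank background term estimated from sample images. Dimensions, noise level, and background rank below are assumptions:

```python
# Toy illustration of the decomposition K = K_noise + K_background used to
# make the Hotelling trace computable from few images, and of the finite-
# sample variability of the resulting SNR^2 estimate.  Not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
p = 64                          # number of pixels (assumed)
s = rng.normal(0, 1, p)         # difference of class means (assumed known)
K_noise = 0.5 * np.eye(p)       # average detector noise: diagonal, known

# "True" background covariance: rank-3 lumpy variability (assumed)
A = rng.normal(0, 1, (p, 3))
K_true = K_noise + A @ A.T
snr2_true = s @ np.linalg.solve(K_true, s)   # Hotelling SNR^2 with true K

def snr2_estimate(n_images):
    """SNR^2 with K_background estimated from n_images sample backgrounds."""
    b = rng.normal(0, 1, (n_images, 3)) @ A.T    # sample backgrounds
    K_hat = K_noise + np.cov(b.T)   # full rank thanks to the noise term
    return s @ np.linalg.solve(K_hat, s)

few = np.mean([snr2_estimate(20) for _ in range(50)])     # small-sample regime
many = np.mean([snr2_estimate(2000) for _ in range(5)])   # large-sample regime
print(f"true {snr2_true:.2f}  n=20 {few:.2f}  n=2000 {many:.2f}")
```

The known noise term keeps the estimated covariance invertible even when the number of sample images is far smaller than the number of pixels; the paper's point is that in very low-noise situations this convenience comes at the cost of bias.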
- Kupinski, M. A., Watson, A. B., Siewerdsen, J. H., Myers, K. J., & Eckstein, M. (2007). Image quality: Introduction. Journal of the Optical Society of America A: Optics and Image Science, and Vision, 24(12).
- Park, S., Barrett, H. H., Clarkson, E., Kupinski, M. A., & Myers, K. J. (2007). Channelized-ideal observer using Laguerre-Gauss channels in detection tasks involving non-Gaussian distributed lumpy backgrounds and a Gaussian signal. Journal of the Optical Society of America A: Optics and Image Science, and Vision, 24(12), B136-B150. PMID: 18059906; PMCID: PMC2655642. Abstract: We investigate a channelized-ideal observer (CIO) with Laguerre-Gauss (LG) channels to approximate ideal-observer performance in detection tasks involving non-Gaussian distributed lumpy backgrounds and a Gaussian signal. A Markov-chain Monte Carlo approach is employed to determine the performance of both the ideal observer and the CIO using a large number of LG channels. Our results indicate that the CIO with LG channels can approximate ideal-observer performance within error bars, depending on the imaging system, object, and channel parameters. The CIO also outperforms a channelized-Hotelling observer using the same channels. In addition, an alternative approach for estimating the CIO is investigated. This approach makes use of the characteristic functions of channelized data and employs an approximation method to the area under the receiver operating characteristic curve. The alternative approach provides good estimates of the performance of the CIO with five LG channels. However, for large channel cases, more efficient computational methods need to be developed for the CIO to become useful in practice. © 2007 Optical Society of America.
- Clarkson, E., Kupinski, M. A., & Barrett, H. H. (2006). A Probabilistic Model for the MRMC Method, Part 1: Theoretical Development. Academic Radiology, 13(11), 1410-1421. PMID: 17070460; PMCID: PMC2844793. Abstract: Rationale and Objectives: Current approaches to receiver operating characteristic (ROC) analysis use the MRMC (multiple-reader, multiple-case) paradigm in which several readers read each case and their ratings (or scores) are used to construct an estimate of the area under the ROC curve or some other ROC-related parameter. Standard practice is to decompose the parameter of interest according to a linear model into terms that depend in various ways on the readers, cases, and modalities. Though the methodologic aspects of MRMC analysis have been studied in detail, the literature on the probabilistic basis of the individual terms is sparse. In particular, few articles state what probability law applies to each term and what underlying assumptions are needed for the assumed independence. When probability distributions are specified for these terms, these distributions are assumed to be Gaussians. Materials and Methods: This article approaches the MRMC problem from a mechanistic perspective. For a single modality, three sources of randomness are included: the images, the reader skill, and the reader uncertainty. The probability law on the reader scores is written in terms of three nested conditional probabilities, and random variables associated with this probability are referred to as triply stochastic. Results: In this article, we present the probabilistic MRMC model and apply this model to the Wilcoxon statistic. The result is a seven-term expansion for the variance of the figure of merit. Conclusion: We relate the terms in this expansion to those in the standard, linear MRMC model. Finally, we use the probabilistic model to derive constraints on the coefficients in the seven-term expansion. © 2006 AUR.
- Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2006). A Probabilistic Model for the MRMC Method, Part 2: Validation and Applications. Academic Radiology, 13(11), 1422-1430. PMID: 17070461; PMCID: PMC2077079. Abstract: Rationale and Objectives: We have previously described a probabilistic model for the multiple-reader, multiple-case paradigm for receiver operating characteristic analysis. When the figure of merit is the Wilcoxon statistic, this model returns a seven-term expansion for the variance of this statistic as a function of the numbers of cases and readers. This probabilistic model also provides expressions for the coefficients in the seven-term expansion in terms of expectations over the internal noise, readers, and cases. Finally, this probabilistic model sets bounds on both the overall variance of the Wilcoxon statistic and the individual coefficients. Materials and Methods: In this article, we will first validate the probabilistic model by comparing variances determined by direct computation of the expansion coefficients to empirical estimates of the variance using independent sampling. Validation of the probabilistic model will enable us to use the direct estimates of the expansion coefficients as a gold standard to compare other coefficient-estimation techniques. Next, we develop a coefficient-estimation technique that employs bootstrapping to estimate the Wilcoxon statistic variance for different numbers of readers and cases. We then employ constrained, least-squares fitting techniques to estimate the expansion coefficients. The constraints used in this fitting are derived directly from the probabilistic model. Results: Using two different simulation studies, we show that the novel (and practical) bootstrapping/fitting technique returns estimates of the coefficients that are consistent with the gold standard. Conclusion: The results presented also serve to validate the seven-term expansion for the variance of the Wilcoxon statistic. © 2006 AUR.
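The bootstrapping step used here to estimate the variance of the reader-averaged Wilcoxon statistic can be sketched with a toy multiple-reader, multiple-case score model. The score model (reader skill as a multiplier, Gaussian internal noise) and all sample sizes are assumptions for illustration:

```python
# Minimal sketch of bootstrapping the reader-averaged Wilcoxon (AUC) statistic
# over readers and cases.  The simulated score model is an assumption.
import numpy as np

rng = np.random.default_rng(2)
n_readers, n_neg, n_pos = 5, 40, 40

# Toy triply stochastic scores: case latent value x reader skill + internal noise
skill = rng.normal(1.0, 0.2, n_readers)            # per-reader skill multiplier
x_neg = rng.normal(0.0, 1.0, n_neg)                # signal-absent case values
x_pos = rng.normal(1.0, 1.0, n_pos)                # signal-present case values
scores_neg = skill[:, None] * x_neg[None, :] + rng.normal(0, 0.3, (n_readers, n_neg))
scores_pos = skill[:, None] * x_pos[None, :] + rng.normal(0, 0.3, (n_readers, n_pos))

def wilcoxon_auc(sn, sp):
    """Wilcoxon AUC estimate averaged over readers (ties count 1/2)."""
    gt = sp[:, None, :] > sn[:, :, None]
    eq = sp[:, None, :] == sn[:, :, None]
    return np.mean(gt) + 0.5 * np.mean(eq)

auc_hat = wilcoxon_auc(scores_neg, scores_pos)

# Bootstrap: resample readers and cases independently with replacement
boot = []
for _ in range(500):
    r = rng.integers(0, n_readers, n_readers)
    cn = rng.integers(0, n_neg, n_neg)
    cp = rng.integers(0, n_pos, n_pos)
    boot.append(wilcoxon_auc(scores_neg[np.ix_(r, cn)], scores_pos[np.ix_(r, cp)]))
var_hat = np.var(boot)
print(f"AUC = {auc_hat:.3f}, bootstrap variance = {var_hat:.5f}")
```

Repeating this for several choices of reader and case counts yields the variance-versus-sample-size data to which the paper fits its seven-term expansion under the model-derived constraints.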
- Kupinski, M. A., Hoppin, J. W., Krasnow, J., Dahlberg, S., Leppo, J. A., King, M. A., Clarkson, E., & Barrett, H. H. (2006). Comparing cardiac ejection fraction estimation algorithms without a gold standard. Academic Radiology, 13(3), 329-337. PMID: 16488845; PMCID: PMC2464280. Abstract: Rationale and Objectives. Imaging and estimation of left ventricular function have major diagnostic and prognostic importance in patients with coronary artery disease. It is vital that the method used to estimate cardiac ejection fraction (EF) allows the observer to best perform this task. To measure task-based performance, one must clearly define the task in question, the observer performing the task, and the patient population being imaged. In this report, the task is to accurately and precisely measure cardiac EF, and the observers are human-assisted computer algorithms that analyze the images and estimate cardiac EF. It is very difficult to measure the performance of an observer by using clinical data because estimation tasks typically lack a gold standard. A solution to this "no-gold-standard" problem recently was proposed, called regression without truth (RWT). Materials and Methods. Results of three different software packages used to analyze gated, cardiac, and nuclear medicine images, each of which uses a different algorithm to estimate a patient's cardiac EF, are compared. The three methods are the Emory method, Quantitative Gated Single-Photon Emission Computed Tomographic method, and the Wackers-Liu Circumferential Quantification method. The same set of images is used as input to each of the three algorithms. Data were analyzed from the three different algorithms by using RWT to determine which produces the best estimates of cardiac EF in terms of accuracy and precision. Results and Discussion. In performing this study, three different consistency checks were developed to ensure that the RWT method is working properly. The Emory method of estimating EF slightly outperformed the other two methods. In addition, the RWT method passed all three consistency checks, garnering confidence in the method and its application to clinical data. © AUR, 2006.
- Park, S., Clarkson, E., Barrett, H. H., Kupinski, M. A., & Myers, K. J. (2006). Performance of a channelized-ideal observer using Laguerre-Gauss channels for detecting a Gaussian signal at a known location in different lumpy backgrounds. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 6146. Abstract: The Bayesian ideal observer gives a measure for image quality since it uses all available statistical information for a given image data. A channelized-ideal observer (CIO), which reduces the dimensionality of integrals that need to be calculated for the ideal observer, has been introduced in the past. The goal of the CIO is to approximate the performance of the ideal observer in certain detection tasks. In this work, a CIO using Laguerre-Gauss (LG) channels is employed for detecting a rotationally symmetric Gaussian signal at a known location in the non-Gaussian distributed lumpy background. The mean number of lumps in the lumpy background is varied to see the impact of image statistics on the performance of this CIO and a channelized-Hotelling observer (CHO) using the same channels. The width parameter of LG channels is also varied to see its impact on observer performance. A Markov-chain Monte Carlo (MCMC) method is employed to determine the performance of the CIO using large numbers of LG channels. Simulation results show that the CIO is a better observer than the CHO for the task. The results also indicate that the performance of the CIO approaches that of the ideal observer as the mean number of lumps in the lumpy background decreases. This implies that LG channels may be efficient for the CIO to approximate the performance of the ideal observer in tasks using non-Gaussian distributed lumpy backgrounds.
- Sahu, A. K., Joshi, A., Kupinski, M. A., & Sevick-Muraca, E. M. (2006). Assessment of a fluorescence-enhanced optical imaging system using the Hotelling observer. Optics Express, 14(17), 7642-7660. PMID: 19529133; PMCID: PMC2832206. Abstract: This study represents a first attempt to assess the detection capability of a fluorescence-enhanced optical imaging system as quantified by the Hotelling observer. The imaging system is simulated by the diffusion approximation of the time-dependent radiative transfer equation, which describes near-infrared (NIR) light propagation through a breast phantom of clinically relevant volume. Random structures in the background are introduced using a lumpy-object model as a representation of anatomical structure as well as non-uniform distribution of disease markers. The systematic errors and noise associated with the actual experimental conditions are incorporated into the simulated boundary measurements to acquire imaging data sets. A large number of imaging data sets is considered in order to perform Hotelling observer studies. We find that the signal-to-noise ratio (SNR) of the Hotelling observer (i) decreases as the strength of lumpy perturbations in the background increases, (ii) decreases as the target depth increases, and (iii) increases as excitation light leakage decreases, and reaches a maximum for filter optical density values of 5 or higher. © 2006 Optical Society of America.
- Barrett, H. H., Kupinski, M. A., & Clarkson, E. (2005). Probabilistic foundations of the MRMC method. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 5749, 21-31. Abstract: Current approaches to ROC analysis use the MRMC (multiple-reader, multiple-case) paradigm in which several readers read each case and their ratings are used to construct an estimate of the area under the ROC curve or some other ROC-related parameter. Standard practice is to decompose the parameter of interest according to a linear model into terms that depend in various ways on the readers, cases and modalities. It is assumed that the terms are statistically independent (or at least uncorrelated). Bootstrap methods are then used to estimate the variance of the estimate and the contributions from the individual terms in the assumed expansion. Though the methodological aspects of MRMC analysis have been studied in detail, the literature on the probabilistic basis of the individual terms is sparse. In particular, few papers state what probability law applies to each term and what underlying assumptions are needed for the assumed independence. This paper approaches the MRMC problem from a mechanistic perspective. For a single modality, three sources of randomness are included: the images, the reader skill and the reader uncertainty. The probability law on the parameter estimate is written in terms of three nested conditional probabilities, and random variables associated with this probability are referred to as triply stochastic. The triply stochastic probability is used to define the overall average of any ROC parameter as well as certain partial averages of utility in MRMC analysis. When this theory is applied to estimates of an ROC parameter for a single modality, it is shown that the variance of the estimate can be written as a sum of three terms, rather than the four that would be expected in MRMC analysis. The usual terms in MRMC expansions do not appear naturally in multiply-stochastic theory. A rigorous MRMC expansion can be constructed by adding and subtracting partial averages to the parameter of interest in a tautological manner. In this approach the parameter is decomposed into a sum of four uncorrelated, zero-mean random variables, with each term clearly defined in terms of conditional probabilities. When the variance of the expansion is computed, however, numerous subtractions occur, and there is no apparent advantage to computing the variance term by term; the final result is the same as one gets from the triply stochastic decomposition, at least for the Wilcoxon estimator. No other nontrivial MRMC expansion appears to be possible.
- Gross, K. A., & Kupinski, M. A. (2005). SPECT image quality assessment and system parameter optimization for detection tasks. Frontiers in Optics. doi:10.1364/fio.2005.fthm2. Abstract: The assessment of image quality and the optimization of imaging-system parameters for detection tasks are discussed for simulated multiple-pinhole, multiple-camera, small-animal SPECT imaging systems.
- Gross, K., Kupinski, M. A., & Hesterman, J. Y. (2005). A fast model of a multiple-pinhole SPECT imaging system. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 5749, 118-127. Abstract: The Center for Gamma-Ray Imaging is developing a number of small-animal SPECT imaging systems. These systems consist of multiple stationary detectors, each of which has its own multiple-pinhole collimator. The location of the pinhole plates (i.e., magnification), the number of pinholes within each plate, as well as the pinhole locations are all adjustable. The performance of the Bayesian ideal observer sets the upper limit on task performance and can be used to optimize imaging hardware, such as pinhole configurations. Markov-chain Monte Carlo techniques have been developed to compute the ideal observer but require complete knowledge of the statistics of both the imaging system (such as the noise) and the class of random objects being imaged, in addition to an accurate forward model connecting the object to the image. Ideal observer computations using Monte Carlo techniques are burdensome because the forward model must be simulated millions of times for each imaging system. We present an efficient technique for computing the Bayesian ideal observer for multiple-pinhole, small-animal SPECT systems that accounts for both the finite size of the pinholes and the stochastic nature of the objects being imaged. This technique relies on an efficient, radiometrically correct forward model that maps an object to an image in less than 20 milliseconds. An analysis of the error of the forward model, as well as the results of a ROC study using the ideal observer test statistic, is presented.
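The idea of a fast geometric forward model can be sketched with a deliberately crude single-pinhole projector: object-plane activity is mapped through an ideal pinhole onto a detector with magnification m, then photon-counting noise is added. The geometry, grid sizes, and distances are assumptions for illustration, not the paper's radiometrically correct model:

```python
# Crude single-pinhole forward model: g = H f + Poisson noise.
# An ideal pinhole inverts and magnifies the object onto the detector plane.
import numpy as np

rng = np.random.default_rng(4)
n_obj, n_det = 16, 32
d_obj, d_det = 2.0, 4.0          # pinhole-to-object / pinhole-to-detector (assumed)
m = d_det / d_obj                # magnification = 2

def project(obj):
    """Map object-plane activity to mean detector counts (ideal pinhole)."""
    det = np.zeros((n_det, n_det))
    for iy in range(n_obj):
        for ix in range(n_obj):
            # object-plane coordinates centered on the pinhole axis
            x = ix - n_obj / 2 + 0.5
            y = iy - n_obj / 2 + 0.5
            # pinhole image is inverted and magnified
            u = int(round(-m * x + n_det / 2 - 0.5))
            v = int(round(-m * y + n_det / 2 - 0.5))
            if 0 <= u < n_det and 0 <= v < n_det:
                det[v, u] += obj[iy, ix]
    return det

obj = np.zeros((n_obj, n_obj))
obj[4, 6] = 100.0                # a point source in the object plane
g_mean = project(obj)            # noiseless mean image
g = rng.poisson(g_mean)          # photon-counting (Poisson) noise
```

Because the mapping is linear in the object, it can be precomputed as a sparse system matrix and reapplied cheaply, which is what makes the millions of forward-model evaluations needed for Monte Carlo ideal-observer studies tractable.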
- Hesterman, J. Y., Kupinski, M. A., Furenlid, L. R., & Wilson, D. W. (2005). Experimental task-based optimization of a four-camera variable-pinhole small-animal SPECT system. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 5749, 300-309. Abstract: We have previously utilized lumpy object models and simulated imaging systems in conjunction with the ideal observer to compute figures of merit for hardware optimization. In this paper, we describe the development of methods and phantoms necessary to validate or experimentally carry out these optimizations. Our study was conducted on a four-camera small-animal SPECT system that employs interchangeable pinhole plates to operate under a variety of pinhole configurations and magnifications (representing optimizable system parameters). We developed a small-animal phantom capable of producing random backgrounds for each image sequence. The task chosen for the study was the detection of a 2 mm diameter sphere within the phantom-generated random background. A total of 138 projection images were used, half of which included the signal. As our observer, we employed the channelized Hotelling observer (CHO) with Laguerre-Gauss channels. The signal-to-noise ratio (SNR) of this observer was used to compare different system configurations. Results indicate agreement between experimental and simulated data with higher detectability rates found for multiple-camera, multiple-pinhole, and high-magnification systems, although it was found that mixtures of magnifications often outperform systems employing a single magnification. This work will serve as a basis for future studies pertaining to system hardware optimization.
- Kupinski, M. A., & Clarkson, E. (2005). Extending the channelized Hotelling observer to account for signal uncertainty and estimation tasks. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 5749, 183-190. Abstract: In medicine, images are taken so that specific tasks can be performed. Thus, any measure of image quality must account for the task the images are to be used for and the observer performing the task. Performing task-based optimizations using human observers is generally difficult, time consuming, expensive and, in the case of hardware optimizations, not necessarily ideal. Model observers have been successfully used in place of human observers. The channelized Hotelling observer is one such model observer. Depending on the choice of channels, the channelized Hotelling observer can be used either to predict human-observer performance or as an ideal observer. This paper will focus on the use of the channelized Hotelling observer as an approximation of the ideal linear observer. Laguerre-Gauss channels have proven useful for ideal-observer computations, but these channels are somewhat limited because they require the signal to be known exactly both in terms of location and shape. In fact, the Laguerre-Gauss channels require the signal to be radially symmetric. We have devised a new method of determining efficient channels that does not require the signal to be symmetric and can even account for signal variability. This method can even be used for linear estimation tasks. We have compared the performances of the channelized Hotelling observer using both this new set of channels and the Laguerre-Gauss channels for a signal-known-exactly detection task, and found that they correlate.
- Park, S., Clarkson, E., Kupinski, M. A., & Barrett, H. H. (2005). Efficiency of human and model observers for signal-detection tasks in non-Gaussian distributed lumpy backgrounds. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 5749, 138-149. Abstract: Efficiencies of the human observer and channelized-Hotelling observers (CHOs) relative to the ideal observer for signal-detection tasks are discussed. A CHO using Laguerre-Gauss channels, which we call an efficient CHO (eCHO), and a CHO adding a scanning scheme to the eCHO to include signal-location uncertainty, which we call a scanning eCHO (seCHO), are considered. Both signal-known-exactly (SKE) tasks and signal-known-statistically (SKS) tasks are considered. Signal location is uncertain for the SKS tasks, and lumpy backgrounds are used for background uncertainty in both the tasks. Markov-chain Monte Carlo methods are employed to determine ideal-observer performance on the detection tasks. Psychophysical studies are conducted to compute human-observer performance on the same tasks. A maximum-likelihood estimation method is employed to fit smooth psychometric curves with observer performance measurements. Efficiency is computed as the squared ratio of the detectabilities of the observer of interest to a standard observer. Depending on image statistics, the ideal observer or the Hotelling observer is used as the standard observer. The results show that the eCHO performs poorly in detecting signals with location uncertainty and the seCHO performs only slightly better, while the ideal observer outperforms the human observer and the CHOs for both the tasks. Human efficiencies are less than approximately 2.5% and 41%, respectively, for the SKE and SKS tasks, where the gray levels of the lumpy background are non-Gaussian distributed. These results also imply that human observers are not affected by signal-location uncertainty as much as the ideal observer. However, for the SKE tasks using Gaussian-distributed lumpy backgrounds, the human efficiency ranges between 28% and 42%. Three different simplified pinhole imaging systems are simulated, and the humans and the model observers rank the systems in the same order for both the tasks.
- Park, S., Clarkson, E., Kupinski, M. A., & Barrett, H. H. (2005). Efficiency of the human observer detecting random signals in random backgrounds. Journal of the Optical Society of America A: Optics and Image Science, and Vision, 22(1), 3-16. PMID: 15669610; PMCID: PMC2464287. Abstract: The efficiencies of the human observer and the channelized-Hotelling observer relative to the ideal observer for signal-detection tasks are discussed. Both signal-known-exactly (SKE) tasks and signal-known-statistically (SKS) tasks are considered. Signal location is uncertain for the SKS tasks, and lumpy backgrounds are used for background uncertainty in both cases. Markov chain Monte Carlo methods are employed to determine ideal-observer performance on the detection tasks. Psychophysical studies are conducted to compute human-observer performance on the same tasks. Efficiency is computed as the squared ratio of the detectabilities of the observer of interest to the ideal observer. Human efficiencies are approximately 2.1% and 24%, respectively, for the SKE and SKS tasks. The results imply that human observers are not affected as much as the ideal observer by signal-location uncertainty even though the ideal observer outperforms the human observer for both tasks. Three different simplified pinhole imaging systems are simulated, and the humans and the model observers rank the systems in the same order for both the SKE and the SKS tasks. © 2005 Optical Society of America.
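The efficiency figure of merit used in these studies, the squared ratio of detectabilities, is simple arithmetic once each observer's AUC is converted to a detectability via d_a = 2 Φ⁻¹(AUC). The numeric AUCs below are illustrative assumptions, not the paper's measured values:

```python
# Observer efficiency from AUC values: eff = (d_obs / d_ideal)^2,
# with detectability d_a = 2 * Phi^{-1}(AUC).  Stdlib-only sketch.
from math import sqrt, erf

def inv_phi(p, lo=-10.0, hi=10.0):
    """Inverse standard-normal CDF by bisection (illustrative, not optimized)."""
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def detectability(auc):
    """d_a corresponding to a given area under the ROC curve."""
    return 2.0 * inv_phi(auc)

def efficiency(auc_obs, auc_ideal):
    """Squared ratio of detectabilities of observer vs. standard observer."""
    return (detectability(auc_obs) / detectability(auc_ideal)) ** 2

# e.g. a human AUC of 0.70 against an ideal-observer AUC of 0.95 (assumed values)
eff = efficiency(0.70, 0.95)
print(f"efficiency = {eff:.3f}")
```

Squaring the detectability ratio makes efficiency behave like a ratio of effective photon counts: an efficiency of 10% means the observer uses the image information as if only a tenth of the data were available.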
- Edwards, D. C., Metz, C. E., & Kupinski, M. A. (2004). Ideal observers and optimal ROC hypersurfaces in N-class classification. IEEE Transactions on Medical Imaging, 23(7), 891-895. PMID: 15250641; PMCID: PMC2464283. Abstract: The likelihood ratio, or ideal observer, decision rule is known to be optimal for two-class classification tasks in the sense that it maximizes expected utility (or, equivalently, minimizes the Bayes risk). Furthermore, using this decision rule yields a receiver operating characteristic (ROC) curve which is never above the ROC curve produced using any other decision rule, provided the observer's misclassification rate with respect to one of the two classes is chosen as the dependent variable for the curve (i.e., an "inversion" of the more common formulation in which the observer's true-positive fraction is plotted against its false-positive fraction). It is also known that for a decision task requiring classification of observations into N classes, optimal performance in the expected utility sense is obtained using a set of N - 1 likelihood ratios as decision variables. In the N-class extension of ROC analysis, the ideal observer performance is describable in terms of an (N2 - N - 1)-parameter hypersurface in an (N2 - N)-dimensional probability space. We show that the result for two classes holds in this case as well, namely that the ROC hypersurface obtained using the ideal observer decision rule is never above the ROC hypersurface obtained using any other decision rule (where in our formulation performance is given exclusively with respect to between-class error rates rather than within-class sensitivities).
- Kupinski, M. A., & Clarkson, E. (2004). Image-quality assessment in optical tomography. 2004 2nd IEEE International Symposium on Biomedical Imaging: Macro to Nano, 2, 1471-1474. Abstract: Modern medical imaging systems often rely on complicated hardware and sophisticated algorithms to produce useful digital images. It is essential that the imaging hardware and any reconstruction algorithms used are optimized, enabling radiologists to make the best decisions and quantify a patient's health status. Optimization of the hardware often entails determining the physical design of the system, such as the locations of detectors in optical tomography or the design of the collimator in SPECT systems. For software or reconstruction-algorithm optimization one is often determining the values of regularization parameters or the number of iterations in an iterative algorithm. In this paper, we present an overview of many approaches to measuring task performance as a means to optimize imaging systems and algorithms. Much of the work in this area has taken place in the areas of nuclear-medicine and x-ray imaging. The purpose of this paper is to present some of the task-based measures of image quality that are directly applicable to optical tomography. © 2004 IEEE.
- Park, S., Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2004). Efficient channels for the ideal observer. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 5(26), 12-21. Abstract: For a signal-detection task, the Bayesian ideal observer is optimal among all observers because it incorporates all the statistical information of the raw data from an imaging system. The ideal observer test statistic, the likelihood ratio, is difficult to compute when uncertainties are present in backgrounds and signals. In this work, we propose a new approximation technique to estimate the likelihood ratio. This technique is a dimensionality-reduction scheme we will call the channelized-ideal observer (CIO). We can reduce the high-dimensional integrals of the ideal observer to the low-dimensional integrals of the CIO by applying a set of channels to the data. Lumpy backgrounds and circularly symmetric Gaussian signals are used for simulation studies. Laguerre-Gaussian (LG) channels have been shown to be useful for approximating ideal linear observers with these backgrounds and signals. For this reason, we choose to use LG channels for our data. The concept of efficient channels is introduced to closely approximate ideal-observer performance with the CIO for signal-known-exactly (SKE) detection tasks. Preliminary results using one to three LG channels show that the performance of the CIO is better than the channelized-Hotelling observer for the SKE detection tasks.
- Clarkson, E., Kupinski, M. A., & Hoppin, J. W. (2003). Assessing the accuracy of estimates of the likelihood ratio. Proceedings of SPIE - The International Society for Optical Engineering, 5034, 135-143. Abstract: There are many methods to estimate, from ensembles of signal-present and signal-absent images, the area under the receiver operating characteristic curve for an observer in a detection task. For the ideal observer on realistic detection tasks, all of these methods are time consuming due to the difficulty in calculating the ideal-observer test statistic. There are relations, in the form of equations and inequalities, that can be used to check these estimates by comparing them to other quantities that can also be estimated from the ensembles. This is especially useful for evaluating these estimates for any possible bias due to small sample sizes or errors in the calculation of the likelihood ratio. This idea is demonstrated with a simulation of an idealized single photon emission detector array viewing a possible signal in a two-dimensional lumpy activity distribution.
- Gross, K., Kupinski, M. A., Peterson, T., & Clarkson, E. (2003). Optimizing a multiple-pinhole SPECT system using the ideal observer. Proceedings of SPIE - The International Society for Optical Engineering, 5034, 314-322. Abstract: In a pinhole imaging system, multiple pinholes are potentially beneficial since more radiation will arrive in the detector plane. However, the various images produced by each pinhole may multiplex (overlap), possibly decreasing image quality. In this work we develop the framework for comparing various pinhole configurations using ideal-observer performance as a figure of merit. We compute the ideal-observer test statistic, the likelihood ratio, using a statistical method known as Markov-Chain Monte Carlo. For different imaging systems, we estimate the likelihood ratio for many realizations of noisy image data both with and without a signal present. For each imaging system, the area under the ROC curve provides a meaningful figure of merit for hardware comparison. In this work we compare different pinhole configurations using a three-dimensional lumpy object model, a known signal (SKE), and simulated pinhole imaging systems. The results of our work will eventually serve as a basis for a design of high-resolution pinhole SPECT systems.
- Hoppin, J. W., Kupinski, M. A., Wilson, D. W., Peterson, T., Gershman, B., Kastis, G., Clarkson, E., Furenlid, L., & Barrett, H. H. (2003). Evaluating estimation techniques in medical imaging without a gold standard: Experimental validation. Proceedings of SPIE - The International Society for Optical Engineering, 5034, 230-237. Abstract: Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart to quantify how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known, e.g., a cardiologist does not know the true EF of a patient. An estimation method is typically evaluated by plotting its results against the results of another (more accepted) estimation method. This approach results in the use of one set of estimates as the pseudo-gold standard. We have developed a maximum-likelihood approach for comparing different estimation methods to the gold standard without the use of the gold standard. In previous works we have displayed the results of numerous simulation studies indicating the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x-axis. In an attempt to further validate our method we have designed an experiment performing volume estimation using a physical phantom and two imaging systems (SPECT, CT).
- Kupinski, M. A. (2003). Guest Editor's Introduction: Computing in Optics. Computing in Science & Engineering, 5(6), 13-14.
- Kupinski, M. A., Clarkson, E., Gross, K., & Hoppin, J. W. (2003). Optimizing imaging hardware for estimation tasks. Proceedings of SPIE - The International Society for Optical Engineering, 5034, 309-313. Abstract: Medical imaging is often performed for the purpose of estimating a clinically relevant parameter. For example, cardiologists are interested in the cardiac ejection fraction, the fraction of blood pumped out of the left ventricle at the end of each heart cycle. Even when the primary task of the imaging system is tumor detection, physicians frequently want to estimate parameters of the tumor, e.g. size and location. For signal-detection tasks, we advocate that the performance of an ideal observer be employed as the figure of merit for optimizing medical imaging hardware. We have examined the use of the minimum variance of the ideal, unbiased estimator as a figure of merit for hardware optimization. The minimum variance of the ideal, unbiased estimator can be calculated using the Fisher information matrix. To account for both image noise and object variability, we used a statistical method known as Markov-chain Monte Carlo. We employed a lumpy object model and simulated imaging systems to compute our figures of merit. We have demonstrated the use of this method in comparing imaging systems for estimation tasks.
- Kupinski, M. A., Clarkson, E., Hoppin, J. W., Chen, L., & Barrett, H. H. (2003). Experimental determination of object statistics from noisy images. Journal of the Optical Society of America A: Optics and Image Science, and Vision, 20(3), 421-429. PMID: 12630828; PMCID: PMC1785324. Abstract: Modern imaging systems rely on complicated hardware and sophisticated image-processing methods to produce images. Owing to this complexity in the imaging chain, there are numerous variables in both the hardware and the software that need to be determined. We advocate a task-based approach to measuring and optimizing image quality in which one analyzes the ability of an observer to perform a task. Ideally, a task-based measure of image quality would account for all sources of variation in the imaging system, including object variability. Often, researchers ignore object variability even though it is known to have a large effect on task performance. The more accurate the statistical description of the objects, the more believable the task-based results will be. We have developed methods to fit statistical models of objects, using only noisy image data and a well-characterized imaging system. The results of these techniques could eventually be used to optimize both the hardware and the software components of imaging systems. © 2003 Optical Society of America.
- Kupinski, M. A., Hoppin, J. W., Clarkson, E., & Barrett, H. H. (2003). Ideal-observer computation in medical imaging with use of Markov-chain Monte Carlo techniques. Journal of the Optical Society of America A: Optics and Image Science, and Vision, 20(3), 430-438. doi:10.1364/josaa.20.000430. PMID: 12630829; PMCID: PMC2464282. Abstract: The ideal observer sets an upper limit on the performance of an observer on a detection or classification task. The performance of the ideal observer can be used to optimize hardware components of imaging systems and also to determine another observer's relative performance in comparison with the best possible observer. The ideal observer employs complete knowledge of the statistics of the imaging system, including the noise and object variability. Thus computing the ideal observer for images (large-dimensional vectors) is burdensome without severely restricting the randomness in the imaging system, e.g., assuming a flat object. We present a method for computing the ideal-observer test statistic and performance by using Markov-chain Monte Carlo techniques when we have a well-characterized imaging system, knowledge of the noise statistics, and a stochastic object model. We demonstrate the method by comparing three different parallel-hole collimator imaging systems in simulation. © 2003 Optical Society of America.
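As a rough illustration of what this line of work computes (a toy stand-in of my own: plain Monte Carlo sampling rather than the Markov-chain techniques the paper develops, and a crude random-level background instead of a lumpy object model), the ideal-observer test statistic marginalizes the data likelihood over the unknown object:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy detection task: 16-pixel "images" g = background + signal + noise,
# where the flat background level is random. The ideal observer's test
# statistic is the likelihood ratio marginalized over backgrounds:
#   LR(g) = E_b[ p(g | b, signal present) ] / E_b[ p(g | b, signal absent) ]
# estimated below by plain Monte Carlo over background samples.
npix, sigma = 16, 0.5
signal = np.zeros(npix)
signal[6:10] = 2.0                      # known signal (SKE task)

def sample_backgrounds(m):
    # Random flat background level (crude stand-in for a lumpy model).
    return rng.normal(2.0, 0.3, size=(m, 1)) * np.ones(npix)

def likelihood(g, means, sigma):
    # Gaussian noise likelihood p(g | mean), up to a common constant.
    d2 = ((g - means) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def likelihood_ratio(g, m=5000):
    b = sample_backgrounds(m)
    return likelihood(g, b + signal, sigma).mean() / likelihood(g, b, sigma).mean()

b_true = sample_backgrounds(1)[0]
g_present = b_true + signal + rng.normal(0.0, sigma, npix)
g_absent = b_true + rng.normal(0.0, sigma, npix)

# Signal-present data should yield the larger likelihood ratio.
assert likelihood_ratio(g_present) > likelihood_ratio(g_absent)
```

For realistic image dimensions this plain-sampling estimator becomes hopelessly inefficient, which is precisely why the paper turns to Markov-chain Monte Carlo.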
- Park, S., Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2003). Ideal-observer performance under signal and background uncertainty. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2732, 342-353. Abstract: We use the performance of the Bayesian ideal observer as a figure of merit for hardware optimization because this observer makes optimal use of signal-detection information. Due to the high dimensionality of certain integrals that need to be evaluated, it is difficult to compute the ideal observer test statistic, the likelihood ratio, when background variability is taken into account. Methods have been developed in our laboratory for performing this computation for fixed signals in random backgrounds. In this work, we extend these computational methods to compute the likelihood ratio in the case where both the backgrounds and the signals are random with known statistical properties. We are able to write the likelihood ratio as an integral over possible backgrounds and signals, and we have developed Markov-chain Monte Carlo (MCMC) techniques to estimate these high-dimensional integrals. We can use these results to quantify the degradation of the ideal-observer performance when signal uncertainties are present in addition to the randomness of the backgrounds. For background uncertainty, we use lumpy backgrounds. We present the performance of the ideal observer under various signal-uncertainty paradigms with different parameters of simulated parallel-hole collimator imaging systems. We are interested in any change in the rankings between different imaging systems under signal and background uncertainty compared to the background-uncertainty case. We also compare psychophysical studies to the performance of the ideal observer. © Springer-Verlag Berlin Heidelberg 2003.
- Clarkson, E., Kupinski, M. A., & Barrett, H. H. (2002). Transformation of characteristic functionals through imaging systems. Optics Express, 10(13), 536-539. PMID: 19436394; PMCID: PMC3143023. Abstract: We describe how to transfer the characteristic functional of an object model through a noisy, discrete imaging system to arrive at the characteristic function of the images. Our method can also incorporate linear post-processing of the images. © 2002 Optical Society of America.
- Drukker, K., Giger, M. L., Horsch, K., Kupinski, M. A., Vyborny, C. J., & Mendelson, E. B. (2002). Computerized lesion detection on breast ultrasound. Medical Physics, 29(7), 1438-1446. PMID: 12148724. Abstract: We investigated the use of a radial gradient index (RGI) filtering technique to automatically detect lesions on breast ultrasound. After initial RGI filtering, a sensitivity of 87% at 0.76 false-positive detections per image was obtained on a database of 400 patients (757 images). Next, lesion candidates were segmented from the background by maximizing an average radial gradient (ARD) index for regions grown from the detected points. At an overlap of 0.4 with a radiologist lesion outline, 75% of the lesions were correctly detected. Subsequently, round robin analysis was used to assess the quality of the classification of lesion candidates into actual lesions and false-positives by a Bayesian neural network. The round robin analysis yielded an Az value of 0.84, and an overall performance by case of 94% sensitivity at 0.48 false-positives per image. Use of computerized analysis of breast sonograms may ultimately facilitate the use of sonography in breast cancer screening programs. © 2002 American Association of Physicists in Medicine.
- Edwards, D. C., Kupinski, M. A., Metz, C. E., & Nishikawa, R. M. (2002). Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model. Medical Physics, 29(12), 2861-2870. PMID: 12512721. Abstract: We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of "candidate detections" as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well. © 2002 American Association of Physicists in Medicine.
- Hoppin, J. W., Kupinski, M. A., Kastis, G. A., Clarkson, E., & Barrett, H. H. (2002). Objective comparison of quantitative imaging modalities without the use of a gold standard. IEEE Transactions on Medical Imaging, 21(5), 441-449. PMID: 12071615; PMCID: PMC3150581. Abstract: Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart in order to know how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known, e.g., a cardiologist does not know the true EF of a patient. Thus, researchers have often evaluated an estimation method by plotting its results against the results of another (more accepted) estimation method, which amounts to using one set of estimates as the pseudogold standard. In this paper, we present a maximum-likelihood approach for evaluating and comparing different estimation methods without the use of a gold standard with specific emphasis on the problem of evaluating EF estimation methods. Results of numerous simulation studies will be presented and indicate that the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x axis.
- Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2002). Matching statistical object models to real images. Proceedings of SPIE - The International Society for Optical Engineering, 4686, 37-42. Abstract: We advocate a task-based approach to measuring and optimizing image quality; that is, optimize imaging systems based on the performance of a particular observer performing a specific task. This type of analysis can require numerous images and is, thus, infeasible with real patients. Researchers are forced to employ statistical models from which they can produce as many images as required. We have developed methods to accurately fit statistical models of continuous objects to real images. The fitted models can be used for hardware optimizations as well as image-processing optimizations. We have employed a continuous lumpy object model in this research and found that our method can accurately determine model parameters in simulation.
- Kupinski, M. A., Hoppin, J. W., Clarkson, E., Barrett, H. H., & Kastis, G. A. (2002). Estimation in medical imaging without a gold standard. Academic Radiology, 9(3), 290-297. PMID: 11887945; PMCID: PMC3143018. Abstract: Rationale and Objectives. In medical imaging, physicians often estimate a parameter of interest (eg, cardiac ejection fraction) for a patient to assist in establishing a diagnosis. Many different estimation methods may exist, but rarely can one be considered a gold standard. Therefore, evaluation and comparison of different estimation methods are difficult. The purpose of this study was to examine a method of evaluating different estimation methods without use of a gold standard. Materials and Methods. This method is equivalent to fitting regression lines without the x axis. To use this method, multiple estimates of the clinical parameter of interest for each patient of a given population were needed. The authors assumed the statistical distribution for the true values of the clinical parameter of interest was a member of a given family of parameterized distributions. Furthermore, they assumed a statistical model relating the clinical parameter to the estimates of its value. Using these assumptions and observed data, they estimated the model parameters and the parameters characterizing the distribution of the clinical parameter. Results. The authors applied the method to simulated cardiac ejection fraction data with varying numbers of patients, numbers of modalities, and levels of noise. They also tested the method on both linear and nonlinear models and characterized the performance of this method compared to that of conventional regression analysis by using x-axis information. Results indicate that the method follows trends similar to that of conventional regression analysis as patients and noise vary, although conventional regression analysis outperforms the method presented because it uses the gold standard which the authors assume is unavailable. Conclusion. The method accurately estimates model parameters. These estimates can be used to rank the systems for a given estimation task. © AUR, 2002.
- Liu, Z., Kastis, G. A., Stevenson, G. D., Barrett, H. H., Furenlid, L. R., Kupinski, M. A., Patton, D. D., & Wilson, D. W. (2002). Quantitative analysis of acute myocardial infarct in rat hearts with ischemia-reperfusion using a high-resolution stationary SPECT system. Journal of Nuclear Medicine, 43(7), 933-939. PMID: 12097466; PMCID: PMC3062997. Abstract: The purpose of this study was to develop an in vivo imaging protocol for a high-resolution stationary SPECT system, called FASTSPECT, in a rat heart model of ischemia-reperfusion (IR) and to compare 99mTc-sestamibi imaging and triphenyltetrazolium chloride (TTC) staining for reliability and accuracy in the measurement of myocardial infarcts. Methods: FASTSPECT consists of 24 modular cameras and a 24-pinhole aperture with 1.5-mm spatial resolution and 13.3 cps/μCi (0.359 cps/kBq) sensitivity. The IR heart model was created by ligating the left coronary artery for 90 min and then releasing the ligature for 30 min. Two hours after 99mTc-sestamibi injection (5-10 mCi [185-370 MBq]), images were acquired for 5-10 min for 5 control rats and 11 IR rats. The hearts were excised, and the left ventricle was sectioned into 4 slices for TTC staining. Results: Left and right ventricular myocardium in control rats was shown clearly, with uniform 99mTc-sestamibi distribution and 100% TTC staining for viable myocardium. Nine of 11 rats with IR survived throughout imaging and exhibited 50.8% ± 2.7% ischemic area and 37.9% ± 3.9% infarct in the left ventricle on TTC staining. The infarct size measured by FASTSPECT imaging was 37.6% ± 3.6%, which correlated significantly with that measured by TTC staining (r = 0.974; P < 0.01). Conclusion: The results confirmed the accuracy of FASTSPECT imaging for measurement of acute myocardial infarcts in rat hearts. Application of FASTSPECT imaging in small animals may be feasible for investigating myocardial IR injury and the effects of revascularization.
- Edwards, D. C., Papaioannou, J., Jiang, Y., Kupinski, M. A., & Nishikawa, R. M. (2001). Eliminating false-positive microcalcification clusters in a mammography CAD scheme using a Bayesian neural network. Proceedings of SPIE - The International Society for Optical Engineering, 4322(3), 1954-1960. Abstract: We have applied a Bayesian neural network (BNN) to the task of distinguishing between true-positive (TP) and false-positive (FP) detected clusters in a computer-aided diagnosis (CAD) scheme for detecting clustered microcalcifications in mammograms. Because BNNs can approximate ideal observer decision functions given sufficient training data, this approach should have better performance than our previous FP cluster elimination methods. Eight cluster-based features were extracted from the TP and FP clusters detected by the scheme in a training dataset of 39 mammograms. This set of features was used to train a BNN with eight input nodes, five hidden nodes, and one output node. The trained BNN was tested on the TP and FP clusters detected by our scheme in an independent testing set of 50 mammograms. The BNN output was analyzed using ROC and FROC analysis. The detection scheme with the BNN for FP cluster elimination had substantially better cluster sensitivity at low FP rates (below 0.8 FP clusters per image) than the original detection scheme without the BNN. Our preliminary research shows that a BNN can improve the performance of our scheme for detecting clusters of microcalcifications.
- Kupinski, M. A., Edwards, D. C., Giger, M. L., & Metz, C. E. (2001). Ideal observer approximation using Bayesian classification neural networks. IEEE Transactions on Medical Imaging, 20(9), 886-899. PMID: 11585206. Abstract: It is well understood that the optimal classification decision variable is the likelihood ratio or any monotonic transformation of the likelihood ratio. An automated classifier which maps from an input space to one of the likelihood ratio family of decision variables is an optimal classifier or "ideal observer." Artificial neural networks (ANNs) are frequently used as classifiers for many problems. In the limit of large training sample sizes, an ANN approximates a mapping function which is a monotonic transformation of the likelihood ratio, i.e., it estimates an ideal observer decision variable. A principal disadvantage of conventional ANNs is the potential over-parameterization of the mapping function which results in a poor approximation of an optimal mapping function for smaller training samples. Recently, Bayesian methods have been applied to ANNs in order to regularize training to improve the robustness of the classifier. The goal of training a Bayesian ANN with finite sample sizes is, as with unlimited data, to approximate the ideal observer. We have evaluated the accuracy of Bayesian ANN models of ideal observer decision variables as a function of the number of hidden units used, the signal-to-noise ratio of the data and the number of features or dimensionality of the data. We show that when enough training data are present, excess hidden units do not substantially degrade the accuracy of Bayesian ANNs. However, the minimum number of hidden units required to best model the optimal mapping function varies with the complexity of the data.
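A minimal illustration of the underlying idea (with ordinary logistic regression standing in for the paper's Bayesian ANN, on a toy data model of my own): a classifier trained on class labels learns a decision variable that is a monotonic transformation of the likelihood ratio, so its ROC performance approaches that of the ideal observer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two Gaussian classes with equal (identity) covariance: the ideal observer
# is the linear discriminant x . (mu1 - mu0), and a logistic model
# sigmoid(w . x + b) can represent the exact posterior probability.
mu0, mu1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
n = 5000
x = np.vstack([rng.normal(size=(n, 2)) + mu0, rng.normal(size=(n, 2)) + mu1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Fit logistic regression by batch gradient descent on the cross-entropy loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    w -= 0.5 * (x.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

def auc(t0, t1):
    """Empirical area under the ROC curve (Mann-Whitney statistic)."""
    t = np.concatenate([t0, t1])
    ranks = t.argsort().argsort() + 1.0
    return (ranks[len(t0):].sum() - len(t1) * (len(t1) + 1) / 2) / (len(t0) * len(t1))

scores = x @ w + b
auc_learned = auc(scores[y == 0], scores[y == 1])
auc_ideal = auc(x[y == 0] @ (mu1 - mu0), x[y == 1] @ (mu1 - mu0))

# The trained classifier's decision variable approximates the ideal observer's.
assert abs(auc_learned - auc_ideal) < 0.01
```

The paper's concern is the finite-sample regime, where an over-parameterized network fits noise; the Bayesian prior over weights regularizes training so the approximation to the ideal observer degrades gracefully.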
- Edwards, D. C., Kupinski, M. A., Nishikawa, R. M., & Metz, C. E. (2000). Estimation of linear observer templates in the presence of multi-peaked Gaussian noise through 2AFC experiments. Proceedings of SPIE - The International Society for Optical Engineering, 3981, 85-96. Abstract: We extend a method for linear template estimation developed by Abbey et al. which demonstrated that a linear observer template can be estimated effectively through a two-alternative forced choice (2AFC) experiment, assuming the noise in the images is Gaussian, or multivariate normal (MVN). We relax this assumption, allowing the noise in the images to be drawn from a weighted sum of MVN distributions, which we call a multi-peaked MVN (MPMVN) distribution. Our motivation is that more complicated probability density functions might be approximated in general by such MPMVN distributions. Our extension of Abbey et al.'s method requires us to impose the additional constraint that the covariance matrices of the component peaks of the signal-present noise distribution all be equal, and that the covariance matrices of the component peaks of the signal-absent noise distribution all be equal (but different in general from the signal-present covariance matrices). Preliminary research shows that our generalized method is capable of producing unbiased estimates of linear observer templates in the presence of MPMVN noise under the stated assumptions. We believe this extension represents a next step toward the general treatment of arbitrary image noise distributions.
- Edwards, D., Kupinski, M., Nagel, R., Nishikawa, R., & Papaioannou, J. (2000). Using a Bayesian neural network to optimally eliminate false-positive microcalcification detections in a CAD scheme. Digital Mammography, Medical Physics Publishing, Madison, 168--173.
- Esthappan, J., Kupinski, M. A., Lan, L., & Hoffmann, K. R. (2000). A method for the determination of the 3D orientations and positions of catheters from single-plane x-ray images. Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings, 3, 2029-2032. Abstract: The three-dimensional (3D) orientation and position of an object, i.e., a configuration of points, can be determined by use of the single projection technique (SPT) from a single projection image, given the relative 3D positions of the points and initial estimates of the orientation and position. The accuracy of the SPT for the case of L-shaped catheters was evaluated in simulation studies. Catheter models were generated, oriented and positioned, and then projected onto an image plane to generate projection images. Gaussian-distributed noise was added to the image positions. The 3D orientations and positions of the catheters were determined using the SPT, which iteratively aligns the model points with their respective image positions. Studies indicate that the orientation and position of a catheter of diameter 0.18 cm can be determined to within 1.6° and 0.8 cm, respectively. These results are comparable to those obtained with a J-shaped catheter, indicating that the technique is generally applicable independent of catheter shape. Studies indicate that the SPT may provide the basis for the automated determination of the orientations of catheters in vivo from single-plane projection images. This automated method may facilitate interventional procedures by eliminating the need for imaging the vasculature at various angulations of the gantry, and may, thereby, reduce procedure times, complications, and radiation dose. In the future, the information provided by the SPT may be employed by 3D vessel reconstruction techniques to extend conventional roadmapping techniques from 2D to 3D.
- Giger, M. L., Huo, Z., Kupinski, M. A., & Vyborny, C. J. (2000). Computer-aided diagnosis in mammography. Handbook of medical imaging, 2, 915--1004.
- Kupinski, M. A., Anastasio, M. A., & Giger, M. L. (2000). Multiobjective genetic optimization of diagnostic classifiers used in the computerized detection of mass lesions in mammography. Proceedings of SPIE - The International Society for Optical Engineering, 3979, 40-45. Abstract: We have recently proposed and developed a multiobjective approach to training classification systems. In this approach, the objectives, i.e., the sensitivity and specificity, of a classifier are simultaneously optimized, resulting in a series of solutions that are equivalent in the absence of any a priori knowledge regarding the relative merits of the two objectives. These solutions form a receiver operating characteristic (ROC) curve that is, theoretically, the best possible ROC curve that can be obtained using the given classifier and given training dataset. We have applied this technique to the optimization of classifiers for the computerized detection of mass lesions in digitized mammograms. Comparisons will be made between the results obtained using the multiobjective approach and results obtained using more conventional approaches. We employed a database of 60 consecutive, non-palpable mass lesion cases. Features relating to the geometry, intensity, and gradients of the images were calculated for each visible lesion and for many false detections. Using a conventionally trained linear classifier we were able to achieve an Az of 0.84, while the multiobjective approach to training a linear classifier yielded an Az of 0.87 in the task of distinguishing between true lesions and false detections. Using a multiobjective approach to train a rule-based classifier with 5 thresholding rules resulted in an Az of 0.88 in the same task.
- Anastasio, M. A., Kupinski, M. A., Nishikawa, R. M., & Giger, M. L. (1999). Multiobjective approach to optimizing computerized detection schemes. IEEE Nuclear Science Symposium and Medical Imaging Conference, 3, 1879-1883. Abstract: This work addresses a multiobjective approach to optimizing computer-aided diagnosis (CAD) schemes. The multiobjective optimization problem admits a set of solutions, known as the Pareto-optimal set. The performances of the Pareto-optimal solutions can be interpreted as operating points on an optimal ROC or FROC curve, which lies on or above any possible ROC or FROC curve for a given dataset and given CAD classifier.
- Kupinski, M. A. (1999). Multiobjective genetic optimization of diagnostic classifiers with implications for generating receiver operating characteristic curves. IEEE Transactions on Medical Imaging, 18(8), 675-685. PMID: 10534050. Abstract: It is well understood that binary classifiers have two implicit objective functions (sensitivity and specificity) describing their performance. Traditional methods of classifier training attempt to combine these two objective functions (or two analogous class performance measures) into one so that conventional scalar optimization techniques can be utilized. This involves incorporating a priori information into the aggregation method so that the resulting performance of the classifier is satisfactory for the task at hand. We have investigated the use of a niched Pareto multiobjective genetic algorithm (GA) for classifier optimization. With niched Pareto GAs, an objective vector is optimized instead of a scalar function, eliminating the need to aggregate classification objective functions. The niched Pareto GA returns a set of optimal solutions that are equivalent in the absence of any information regarding the preferences of the objectives. The a priori knowledge that was used for aggregating the objective functions in conventional classifier training can instead be applied post-optimization to select one of the series of solutions returned from the multiobjective genetic optimization. We have applied this technique to train a linear classifier and an artificial neural network (ANN), using simulated datasets. The performances of the solutions returned from the multiobjective genetic optimization represent a series of optimal (sensitivity, specificity) pairs, which can be thought of as operating points on a receiver operating characteristic (ROC) curve. All possible ROC curves for a given dataset and classifier lie on or below the ROC curve generated by the niched Pareto genetic optimization. Index terms: Diagnostic classifiers, genetic algorithms, multiobjective optimization, ROC analysis.
- Kupinski, M. A., & Giger, M. L. (1999). Feature selection with limited datasets. Medical Physics, 26(10), 2176-2182. PMID: 10535635. Abstract: Computer-aided diagnosis has the potential of increasing diagnostic accuracy by providing a second reading to radiologists. In many computerized schemes, numerous features can be extracted to describe suspect image regions. A subset of these features is then employed in a data classifier to determine whether the suspect region is abnormal or normal. Different subsets of features will, in general, result in different classification performances. A feature selection method is often used to determine an 'optimal' subset of features to use with a particular classifier. A classifier performance measure (such as the area under the receiver operating characteristic curve) must be incorporated into this feature selection process. With limited datasets, however, there is a distribution in the classifier performance measure for a given classifier and subset of features. In this paper, we investigate the variation in the selected subset of 'optimal' features as compared with the true optimal subset of features caused by this distribution of classifier performance. We consider examples in which the probability that the optimal subset of features is selected can be analytically computed. We show the dependence of this probability on the dataset sample size, the total number of features from which to select, the number of features selected, and the performance of the true optimal subset. Once a subset of features has been selected, the parameters of the data classifier must be determined. We show that, with limited datasets and/or a large number of features from which to choose, bias is introduced if the classifier parameters are determined using the same data that were employed to select the 'optimal' subset of features.
- Anastasio, M. A., & Kupinski, M. A. (1998). Optimization and FROC analysis of rule-based detection schemes using a multiobjective approach. IEEE Transactions on Medical Imaging, 17(6), 1089-1093. doi:10.1109/42.746726. PMID: 10048867. Abstract: Computerized detection schemes have the potential of increasing diagnostic accuracy in medical imaging by alerting radiologists to lesions that they initially overlooked. These schemes typically employ multiple parameters such as threshold values or filter weights to arrive at a detection decision. In order for the system to have high performance, the values of these parameters need to be set optimally. Conventional optimization techniques are designed to optimize a scalar objective function. The task of optimizing the performance of a computerized detection scheme, however, is clearly a multiobjective problem: we wish to simultaneously improve the sensitivity and false-positive rate of the system. In this work we investigate a multiobjective approach to optimizing computerized rule-based detection schemes. In a multiobjective optimization, multiple objectives are simultaneously optimized, with the objective now being a vector-valued function. The multiobjective optimization problem admits a set of solutions, known as the Pareto-optimal set, which are equivalent in the absence of any information regarding the preferences of the objectives. The performances of the Pareto-optimal solutions can be interpreted as operating points on an optimal free-response receiver operating characteristic (FROC) curve, which lies on or above any possible FROC curve for a given dataset and detection scheme. It is demonstrated that generating FROC curves in this manner eliminates several known problems with conventional FROC curve generation techniques for rule-based detection schemes. We employ the multiobjective approach to optimize a rule-based scheme for clustered microcalcification detection that has been developed in our laboratory.
- Anastasio, M. A., Kupinski, M. A., & Pan, X. (1998). New classes of reconstruction methods in reflection mode diffraction tomography. Proceedings of the IEEE Ultrasonics Symposium, 1, 839-842. Abstract: Reflection mode diffraction tomography (DT) is an inversion scheme used to reconstruct the spatially variant refractive index distribution of a scattering object. We propose a linear strategy that makes use of the statistically complementary information inherent in the reflected scattered data to achieve a bias-free reduction of the image variance in two-dimensional (2D) reflection mode DT. We derive infinite classes of estimation methods that can estimate the 2D Radon transform of the (band-pass filtered) scattering object function from the reflected scattered data. When the insonifying source is broadband, we demonstrate that incorporation of the statistically complementary information generated by each frequency in the incident spectrum can further reduce the variance of the images reconstructed using different estimation methods.
- Anastasio, M. A., Kupinski, M. A., & Pan, X. (1998). Noise propagation in diffraction tomography: comparison of conventional algorithms with a new reconstruction algorithm. IEEE Transactions on Nuclear Science, 45(4), 2216-2223. Abstract: In ultrasonic diffraction tomography, ultrasonic waves are used to probe the object of interest at various angles. Unlike in conventional X-ray CT, the incident waves scatter when encountering inhomogeneities. Theoretically, when the scattering inhomogeneities are considered weak, the scattering object can be reconstructed by algorithms developed from a generalized central slice theorem. We develop a hybrid algorithm for reconstruction of a scattering object by transforming the scattered data into a conventional X-ray-like sinogram, thus allowing standard X-ray reconstruction algorithms, such as filtered back-projection, to be applied. We investigate the statistical properties of the filtered back-propagation, direct Fourier, and newly proposed hybrid reconstruction algorithms by performing analytical as well as numerical studies.
- Kupinski, M. A., & Giger, M. L. (1998). Automated seeded lesion segmentation on digital mammograms. IEEE Transactions on Medical Imaging, 17(4), 510-517. PMID: 9845307. Abstract: Segmenting lesions is a vital step in many computerized mass-detection schemes for digital (or digitized) mammograms. We have developed two novel lesion segmentation techniques: one based on a single feature called the radial gradient index (RGI), and one based on simple probabilistic models, to segment mass lesions, or other similar nodular structures, from surrounding background. In both methods a series of image partitions is created using gray-level information as well as prior knowledge of the shape of typical mass lesions. With the former method the partition that maximizes the RGI is selected. In the latter method, probability distributions for gray levels inside and outside the partitions are estimated, and subsequently used to determine the probability that the image occurred for each given partition. The partition that maximizes this probability is selected as the final lesion partition (contour). We tested these methods against a conventional region growing algorithm using a database of biopsy-proven, malignant lesions and found that the new lesion segmentation algorithms more closely match radiologists' outlines of these lesions. At an overlap threshold of 0.30, gray-level region growing correctly delineates 62% of the lesions in our database while the RGI and probabilistic segmentation algorithms correctly segment 92% and 96% of the lesions, respectively.
- Anastasio, M., Kupinski, M., & Pan, X. (1997). Noise properties of reconstructed images in ultrasonic diffraction tomography. IEEE Nuclear Science Symposium & Medical Imaging Conference, 2, 1561-1565. Abstract: In ultrasonic diffraction tomography, ultrasonic waves are used to probe the object of interest at various angles. The incident waves scatter when encountering inhomogeneities, and thus do not travel in straight lines through the imaged object. When the scattering inhomogeneities are considered weak, the scattering object can be reconstructed by algorithms developed from a generalized central slice theorem. In this work, we develop a hybrid algorithm for reconstruction of a scattering object by transforming the measured scattered data into a conventional X-ray-like sinogram, thus allowing standard X-ray reconstruction algorithms, such as filtered back-projection, to be applied. We systematically investigate and compare the statistical properties of three different algorithms: a direct Fourier inversion algorithm, the filtered back-propagation algorithm (which is analogous to the conventional filtered back-projection algorithm), and the newly developed hybrid algorithm. We derive analytical expressions for the variance of the noise in the reconstructed images and investigate the noise properties of the algorithms by performing extensive numerical simulations.
- Kupinski, M. A., & Giger, M. L. (1997). Feature selection and classifiers for the computerized detection of mass lesions in digital mammography. IEEE International Conference on Neural Networks - Conference Proceedings, 4, 2460-2463. Abstract: We have investigated various methods of feature selection for two different data classifiers used in the computerized detection of mass lesions in digital mammograms. Numerous features were extracted from abnormal and normal breast regions from a database consisting of 210 individual mammograms. A stepwise method, a genetic algorithm, and individual feature analysis were employed to select a subset of features to be used with linear discriminants. Similar techniques were also employed for an artificial neural network classifier. In both tests the genetic algorithm was able to either outperform or equal the performance of the other methods.
- Kupinski, M. A., & Giger, M. L. (1997). Investigation of regularized neural networks for the computerized detection of mass lesions in digital mammograms. Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings, 3, 1336-1339. Abstract: Computerized schemes are currently being developed at the University of Chicago to detect mass lesions in digital mammograms. Artificial neural networks play an important role in the detection of masses. Currently, features are extracted from potential lesion areas and sent through a neural network to decide whether the area is to be called a true lesion or a false detection. One of the most difficult aspects of dealing with artificial neural networks is to train them without over-training; in other words, to take both the bias and variance into account when training. Typically, an early stopping technique is employed; that is, the neural network is tested on an independent data set and training is stopped when the performance on this independent data set is maximized. In this paper the effectiveness of regularization is evaluated as a technique to minimize over-training. Regularization adds an extra term to the cost function used in neural network training that penalizes over-complex results. The results of simulation studies are presented along with results obtained using data of actual lesions and false positives from our computerized mass detection scheme.
- Kupinski, M., Giger, M., Lu, P., & Huo, Z. (1995). Computerized detection of mammographic lesions: performance of artificial neural network with enhanced feature extraction [2434-95]. PROCEEDINGS-SPIE THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, 598--598.
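Several of the multiobjective-optimization papers above interpret an optimal ROC or FROC curve as the Pareto-optimal set of (sensitivity, specificity) operating points. A minimal sketch of extracting the non-dominated front from a set of candidate operating points (illustrative only; the papers use a niched Pareto genetic algorithm, not this brute-force filter):

```python
def pareto_front(points):
    """Return the non-dominated (sensitivity, specificity) pairs.

    A point is dominated if some other point is at least as good in
    both coordinates and strictly better in one.  The surviving points
    trace the optimal ROC operating points for the candidate set.
    """
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical classifier operating points: (sensitivity, specificity)
candidates = [(0.90, 0.60), (0.80, 0.80), (0.85, 0.70),
              (0.70, 0.85), (0.60, 0.90), (0.75, 0.75)]
print(pareto_front(candidates))
# (0.75, 0.75) is dominated by (0.80, 0.80) and drops out of the front
```

The sorted front can then be plotted directly as an ROC curve; any a priori preference between sensitivity and specificity is applied afterward to pick a single operating point, as described in the 1999 TMI paper.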
Proceedings Publications
- Kupinski, M. A., Feng, Y., Ottensmeyer, M., Sabet, H., & Furenlid, L. R. (2023). Efficient Method for Generating System Response Functions using Monte Carlo Simulations. In IEEE NSS/MIC 2024.
- May, M., Miller, B. W., Kupinski, M. A., King, M. A., Kuo, P. H., & Furenlid, L. R. (2023). Experimental Characterization of Adaptive Multi-pinhole Apertures. In IEEE NSS/MIC 2023. We characterize a dynamic, multiple-pinhole, single-photon emission computed tomography (SPECT) aperture structure. These apertures are developed for a dedicated brain imaging system whose primary purpose is to acquire data for pharmacokinetic analysis. The apertures are capable of dynamically changing between imaging configurations corresponding to several different resolutions and sensitivities. We report on testing of the aperture performance at different emission energies; we expect more leakage from higher-energy sources. Aperture assembly reliability is being tested by repeatedly moving apertures between different configurations to simulate acquisition cycles. These data are evaluated for the repeatability of aperture positioning and will also help optimize motor speed versus re-calibration frequency to run the system at its highest performance.
- Cronin, K. P., Cirignano, L., Kim, H., Squillante, M. R., Barber, H. B., Kupinski, M. A., & Furenlid, L. R. (2022). Characterization of TlBr Strip Detectors with Waveform Readouts. In 2022 IEEE Nuclear Science Symposium, Medical Imaging Conference, and Room Temperature Semiconductor Detector Conference, IEEE NSS MIC RTSD 2022. Single photon emission computed tomography (SPECT) is used to make a variety of clinical diagnoses. Recently, there has been increased interest in SPECT systems with improved energy resolution to distinguish between multiple isotopes in alpha-emitting radionuclide therapy. Semiconductor detectors are ideal for this task, as direct conversion of photon energy yields improved signal statistics. Both strip and pixel configurations can be optimized to meet spatial resolution requirements. Thallium bromide (TlBr) crystals have become of interest due to their high stopping power, large band gap, and ease of crystal purification. Since transport properties and stability have been improved, it is timely to develop a complete understanding of TlBr and its best detector configurations. We have developed a time-dependent forward model for double-sided strip detectors. By sampling signal waveforms we have a tool for finding interaction depth and sub-pixel spatial resolution. Our work concludes by comparing simulation results to a TlBr 8x8 strip array developed by RMD. Our results show it may be possible to use "hit" strip waveforms to determine interaction depth. The waveforms from neighboring strips can contribute to sub-strip-pitch spatial resolution.
- Cronin, K. P., Kupinski, M. A., Barber, H. B., & Furenlid, L. R. (2022). Simulations and analysis of fluorescence effects in semiconductor x-ray and gamma-ray detectors. In Medical Imaging 2022: Physics of Medical Imaging, 12031.
- Doty, K. J., Kupinski, M. A., Richards, R. G., Ruiz-Gonzalez, M., King, M. A., Kuo, P. H., & Furenlid, L. R. (2022). Fiber Optic Plates as Light Guides for Flat and Curved Scintillation Detectors. In 2022 IEEE Nuclear Science Symposium, Medical Imaging Conference, and Room Temperature Semiconductor Detector Conference, IEEE NSS MIC RTSD 2022. SPECT systems comprising arrays of modular scintillation cameras have long been recognized as an ideal approach for obtaining the dynamic data required for pharmacokinetic studies. As part of an effort to improve spatial resolution in scintillation detectors, we are investigating the use of fiber optic plates as a means to control the light spread in the light guide structure that transfers light from the scintillator exit surface to the light sensors. We have created a custom photon-transport code that handles the complexity of light propagation through fiber-optic cores as well as claddings to allow calculation of mean detector response functions (MDRFs) when combined with Monte Carlo simulations of gamma-ray interactions and light-sensor efficiency. We then invoke the Cramér-Rao lower bound derived from the Fisher information matrix to estimate the intrinsic spatial resolution as a function of the optical design. We have simulated cameras with planar, cylindrically curved, and spherically curved scintillators and report on the efficacy of combining curved designs, which minimize parallax errors from pinhole projections, with fiber optic transfer plates that accomplish the transition from the curved scintillator to a planar array of light sensors. We assess the intrinsic 3D spatial resolution that can be achieved with maximum-likelihood (ML) estimation methods in these candidate modular camera designs, with the objective of arriving at a new version suitable for incorporation as the vertex camera in AdaptiSPECT-C, a project to develop a dedicated human brain imager.
- Doty, K. J., Kupinski, M. A., Richards, R. G., Ruiz-Gonzalez, M., King, M. A., Kuo, P. H., & Furenlid, L. R. (2022). Fisher information comparison between a monolithic and a fiber-optic light guide in a modular gamma camera. In Medical Imaging 2022: Physics of Medical Imaging, 12031.
- Feng, Y., Worstell, W., Kupinski, M., Hashemi, S. A., Soleymani, S., Furenlid, L. R., Ottensmeyer, M., & Sabet, H. (2022). Resolution Recovery by PSF Deconvolution on List Mode MLEM Reconstruction for Dynamic Cardiac SPECT System. In 2022 IEEE Nuclear Science Symposium, Medical Imaging Conference, and Room Temperature Semiconductor Detector Conference, IEEE NSS MIC RTSD 2022. This work presents improvement in imaging resolution by implementing a spatially variant point spread function (SV-PSF) with list-mode MLEM reconstruction for the DC-SPECT system being developed at MGH, Harvard. Results show that with PSF modeling applied, the quality of the reconstructed image is improved, and the DC-SPECT system can achieve a 4.5 mm central spatial resolution with an average of 795 counts/(s*MBq). The results show substantial improvement over the gold-standard GE Discovery 570c performance (7 mm spatial resolution with an average of 460 counts/(s*MBq); 5.8 mm central resolution).
- Furenlid, L. R., May, M., Kupinski, M. A., Feng, Y., Worstell, W., Ottensmeyer, M., & Sabet, H. (2022). Comparison of Printed versus Machined Tungsten Pyramidal Collimators. In 2022 IEEE Nuclear Science Symposium, Medical Imaging Conference, and Room Temperature Semiconductor Detector Conference, IEEE NSS MIC RTSD 2022. Advances in additive manufacturing techniques are creating new opportunities for collimator design in nuclear medicine. In this work we assess the relative merits of fabricating aperture components via printing with tungsten powder versus machining with tungsten alloys. We report on experiences with a variety of metal printing approaches and describe the ranges of material density we have achieved, as well as the figure and finish of raw and polished parts. We also report on the direct comparison of pyramid-shaped collimators, for a dedicated cardiac SPECT system being developed at Massachusetts General Hospital, that we fabricated via both approaches. Illumination with 99mTc point sources is used to create shadow images on an iQID imaging station from which general leakage and pinhole-edge penetration could be experimentally determined. We find that the flexibility in design, for example the ability to easily create skewed clearance cones in arrays of focused pinholes with printed fabrication, must be balanced against higher material density and uniformity in machined tungsten alloy parts. However, as new printing and machining methods are being introduced, the physical tradeoffs are likely to become less significant.
- Kupinski, M. A., Anderson, O., Furenlid, L., Worstell, W., Feng, Y., & Sabet, H. (2022). Development of Collision-Detection Methods for a 5-axis Gamma-Camera Calibration System. In 2022 IEEE Nuclear Science Symposium, Medical Imaging Conference, and Room Temperature Semiconductor Detector Conference, IEEE NSS MIC RTSD 2022. A pinhole-based cardiac SPECT imaging system is currently being developed at Massachusetts General Hospital/Harvard Medical School that utilizes scintillation crystals with laser-induced optical barriers to guide the light based on the position and orientation of each pinhole. This unique crystal geometry presents a calibration challenge, as the calibration of the incident gamma-rays must be both position and orientation controlled. We have previously developed and presented a 5-axis calibration system to account for the 3 linear axes (x, y, and z) as well as two orientation axes (theta and phi) that utilizes a collimated source of gamma-rays to calibrate such segmented-crystal camera systems. However, one of the challenges of such a system is to move the stages in such a way as to avoid collisions with the crystal face. Such collisions could damage the crystal or the acquisition system itself. We have developed a collision-detection system that utilizes accurate 3D CAD models of the calibration-stage system to precisely determine when a collision is going to occur. Thus, potential stage motions can be vetted through the collision model to determine whether any collisions will occur, and corrections can be made to the motion profiles to avoid collisions with sensitive hardware components.
- Kupinski, M. A., Clarkson, E. W., Cronin, K. P., Woolfenden, J. M., Humm, J. L., & Furenlid, L. R. (2022). Observer performance in multi-technology imaging. In Medical Imaging 2022: Image Perception, Observer Performance, and Technology Assessment.
- May, M., Ruiz-Gonzalez, M., Richards, R. G., King, M. A., Kupinski, M. A., Kuo, P. K., & Furenlid, L. R. (2022). Analysis of Adaptive Multi-Pinhole Aperture Plates for Brain SPECT Imaging. In 2022 IEEE Nuclear Science Symposium, Medical Imaging Conference, and Room Temperature Semiconductor Detector Conference, IEEE NSS MIC RTSD 2022. We are developing AdaptiSPECT-C, a dynamic multiple-pinhole, single-photon emission computed tomography (SPECT) system dedicated to whole brain imaging whose primary purpose is to acquire data for pharmacokinetic analysis. We have designed prototype hardware, control electronics, and software that create adjustable pinhole apertures for this clinical SPECT system. The design enables us to dynamically move between imaging configurations corresponding to a variety of desired resolutions and sensitivities. Preliminary testing is necessary to evaluate the aperture assembly performance and reliability. Modular aperture plates are designed to provide multiple choices for the trade-offs between resolution and sensitivity, in order to improve pharmacokinetic SPECT imaging. Each camera is outfitted with 5 adaptive pinholes that can be adjusted to different diameters. These apertures are designed to respond to evolving biodistributions of radiotracers. We present the results of testing the operation of the aperture plate system. We captured projections from a variety of pinhole imaging configurations with a point source object.
- Ruiz-Gonzalez, M., Garrett Richards, R., Doty, K. J., Kupinski, M. A., Kuo, P. H., King, M. A., & Furenlid, L. R. (2022). Testing and Calibration Methods for Hybrid PMT/SiPM Gamma-Ray Camera Read-Out Electronics. In 2022 IEEE Nuclear Science Symposium, Medical Imaging Conference, and Room Temperature Semiconductor Detector Conference, IEEE NSS MIC RTSD 2022. AdaptiSPECT-C is a human brain single-photon emission computed tomography (SPECT) system with 25 stationary scintillation gamma-ray cameras. Each camera has a dedicated read-out electronics system. The back-end electronics consist of a main board which includes, among other features, slots for a system-on-a-module (SoM) and slots for three A/D modules. Each A/D module consists of 28 1-bit sigma-delta modulators (SDMs). Each camera has 25 photomultiplier tube (PMT) channels and 56 silicon photomultiplier (SiPM) channels. Two A/D modules are utilized for the SiPM channels, and one for the PMT channels. We have designed an A/D module tester board for debugging, testing, and calibrating the A/D channels. The tester board takes an input signal from a waveform generator and creates 28 identical copies of the signal. The 28 generated signals are sent to the input of the SDM channels. For rapid testing, the SDM threshold for all channels is controlled by a potentiometer and the DC offset by another potentiometer. The output modulation is sent to bicolor LEDs, where the combined color of the LED pair depends on the SDM rate, which allows us to visually detect modulation variations. The output of the SDM comparator is fed back to the SDM input directly, so that no digital logic is needed for testing. The tester board includes a connector that outputs the 28 generated identical signals to be used as signal input for the main back-end board. This testing method offers a convenient and feasible process to rapidly test, debug, and calibrate the more than 2000 A/D channels in the AdaptiSPECT-C system.
- Ruiz-Gonzalez, M., Richards, R. G., Doty, K. J., Kuo, P. H., Kupinski, M. A., Furenlid, L. R., & King, M. A. (2022). A read-out strategy for high-resolution large-area SiPM-based modular gamma-ray cameras. In Medical Imaging 2022: Physics of Medical Imaging, 12031.
- Auer, B., De Beenhouwer, J., Kalluri, K. S., Lindsay, C., Richards, R. G., May, M., Kupinski, M. A., Kuo, P. H., Furenlid, L. R., & King, M. A. (2021). Evaluation of Down-scatter Contamination in Multi-Pinhole 123I-IMP Brain Perfusion SPECT Imaging. In 2021 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2021. Brain imaging with the 123I radionuclide remains essential for assessing dopamine transporter activity or cerebral blood flow in various cerebral disorders. However, imaging with 123I-labeled tracers suffers from down-scatter contamination from the emission of a series of high-energy (>183 keV, ~3% abundance) gamma photons in addition to the primary photons (159 keV, 83% abundance). In this work, we investigated through simulation studies the effect of down-scatter contamination on image quality using multiple pinhole configurations and aperture sizes of AdaptiSPECT-C, a next-generation multi-pinhole system currently under construction. We simulated a brain phantom with the source distribution for the perfusion imaging agent 123I-IMP as imaged 1 h post-injection. To enable comparison with imaging free of down-scatter interactions, reconstructions were compared qualitatively and quantitatively to those obtained from an acquisition of the same activity distribution simulated with only the 159-keV principal emission of 123I. In this initial study, we demonstrated through quantification and visual inspection of cerebral perfusion reconstructions incorporating down-scatter correction that the inclusion of down-scatter counts does not hamper the imaging performance of AdaptiSPECT-C, even for the pinhole combination most contaminated by such interactions. We have initiated a comparison of these findings against those obtained from a dual-head system employing parallel-hole collimators, for which acquisition is considerably more affected by down-scatter interactions.
- Auer, B., Furenlid, L. R., Kalluri, K. S., King, M. A., Kuo, P. H., Kupinski, M. A., May, M., Richards, R. G., Ruiz-Gonzalez, M., & Sawyer, L. (2021). A Dynamic Pinhole Aperture Control System. In 2021 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2021. We have designed prototype control electronics and software responsible for moving adjustable pinhole apertures. With this design we can create various imaging configurations corresponding to desired resolutions and sensitivities in a large imaging system. AdaptiSPECT-C is a dynamic whole-brain imager composed of 24 cameras, each outfitted with five apertures. This system can be adjusted in real time as part of the data-acquisition method for dynamic studies to account for changes in the source activity as a function of time. The control electronics for these apertures are designed to operate a single camera in the assembly, and each camera can function independently. The modular design allows movement of multiple apertures for fast setup times and storage of preset configurations for ease of repeatability. The software also handles user input for troubleshooting.
- Auer, B., Kalluri, K. S., Lindsay, C., De Beenhouwer, J., Garrett Richards, R., May, M., Kupinski, M. A., Kuo, P. H., Furenlid, L. R., & King, M. A. (2021). Imaging Performance of AdaptiSPECT-C for 99mTc/123I Single- and Dual-Isotope imaging. In 2021 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2021. We have designed a next-generation multi-pinhole system for cerebral imaging, AdaptiSPECT-C. This prototype incorporates 24 detector modules and can provide a total of up to 120 pinholes in multiple combinations around the brain. The system can adapt each aperture to one of three sizes and open or close it via a shutter mechanism. AdaptiSPECT-C provides high-performance, patient-personalized imaging in various brain imaging procedures. In this work we determined the pinhole size and combination of AdaptiSPECT-C for optimal single- and dual-isotope 99mTc-HMPAO perfusion/123I-Ioflupane DaT imaging. We found through quantification and visual inspection that the medium aperture size in a central-plus-obliques pinhole combination for the adaptable AdaptiSPECT-C system is best suited for high-performance single- and dual-isotope perfusion and DaT imaging. We observed that the largest pinhole size, despite its higher sensitivity, degrades the reconstructions both visually and in terms of NRMSE and %SBR because of its lower spatial resolution compared to the medium aperture diameter. For simultaneous imaging, we investigated an extreme case in which the two symmetric photopeak windows slightly overlap and the crosstalk contamination (∼20% and ∼10% for the 99mTc and 123I photopeaks) was not corrected. Our initial results suggest that the reconstructions obtained from the dual-isotope study remain close, quantitatively and visually, to those of the single-isotope simulations. We observed that the obliques and obliques-plus-central combinations are more affected by crosstalk contamination than the central-only configuration.
- Doty, K. J., Furenlid, L. R., King, M. A., Kuo, P. H., Kupinski, M. A., Richards, R. G., & Ruiz-Gonzalez, M. (2021). Prototype Read-Out and Control Electronics for a Hybrid PMT/SiPM Gamma-Ray Camera. In 2021 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2021. We present the implementation and results obtained with a prototype read-out and control board designed to test the electronics for the 24 AdaptiSPECT-C 81-channel hybrid PMT/SiPM modular gamma-ray cameras. The prototype board is utilized to test, optimize, and characterize the 1-bit and non-uniform 2-bit sigma-delta modulators (SDMs) used for the A/D conversion of light sensor signals, to determine optimal parameters for the camera temperature controller, and to develop firmware for the on-board FPGA/processor that manages the temperature sensors and controller, 32-channel IC DACs, PMT power supplies, digital side of SDMs, network server functions, communication with adaptive aperture microcontroller, and data transfer to the main acquisition computer.
- Fan, J., & Kupinski, M. A. (2021). Observer models utilizing compressed textures. In Medical Imaging 2021: Image Perception, Observer Performance, and Technology Assessment, 11599. We have previously presented a method for sorting textures based on whether they obscure a signal, and thus hinder the ability of an observer to perform a signal-detection task, or whether their presence can be easily ignored by the observer and thus does little to impede performance. This analysis has led to a surrogate figure of merit that was demonstrated to correlate with human-observer performance as measured by the channelized Hotelling observer. In this work, we generalize our previous results to include more tasks, including estimation and combined detection/estimation tasks. We demonstrate the ability of this method to determine the textures present in a set of images that are the most detrimental to the specified task. We further devise alternative surrogate figures of merit that can utilize this texture-compression method as a mechanism for generating channels for observer-performance computations.
- Richards, R. G., Ruiz-Gonzalez, M., Doty, K. J., Auer, B., Kupinski, M. A., King, M. A., Kuo, P. H., & Furenlid, L. R. (2021). Integration and Testing of the Hybrid Gamma Cameras for AdaptiSPECT-C. In 2021 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2021. Recent advances in scintillation light sensors are enabling new designs for single-photon-emission computed tomography (SPECT) cameras for use in nuclear medicine, achieving higher sensitivity and spatial resolution than previously possible. One such camera is AdaptiSPECT-C: a new stationary and adaptive SPECT system designed for clinical whole-brain imaging. It employs a 24-stationary-camera configuration in conjunction with multi-pinhole apertures, significantly increasing its overall sensitivity to gamma radiation. Each of the five pinholes assigned to a camera can actively select among several different aperture diameters and a shuttered state; the latter function enables the acquisition of both non-multiplexed and multiplexed image data in a single session. Each 18-cm-square camera in AdaptiSPECT-C houses 25 photomultiplier tubes (PMTs) and 24 multi-pixel photon counters (MPPCs). This combination results in a camera that achieves uniformly high spatial resolution across the entire crystal area. In this work we present the combined MPPC/PMT hybrid camera design, prioritizing modularity and space efficiency. We also cover schematics for the front-end electronics and their hardware integration, specifically highlighting initial test pulses from the two sensor types. These test results show promise for the eventual merging of hybrid signal information into a comprehensive dataset during real-world image acquisitions. Finally, we present preliminary calibration results and compare them to predictions based on light-transport simulations.
- Auer, B., Kalluri, K. S., Abayazeed, A. H., de Beenhouwer, J., Zeraatkar, N., Lindsay, C., Momsen, N. C., Richards, R. G., May, M., Kupinski, M. A., Kuo, P. H., Furenlid, L. R., & King, M. A. (2020). Aperture Size Selection for Improved Brain Tumor Detection and Quantification in Multi-Pinhole 123I-CLINDE SPECT Imaging. In 2020 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2020. A next-generation multi-pinhole system dedicated to brain SPECT imaging, which we call AdaptiSPECT-C, is being constructed by our research team. The system prototype used herein consists of 25 square detector modules and a total of 100 apertures, grouped four per module. The system is specifically designed for multi-purpose brain imaging and is capable of adapting, in real time, each aperture's size and whether it is open or shuttered closed. Such a system would provide high-performance, patient-personalized imaging for a wide range of brain imaging tasks. In this work we investigated the effect of pinhole-diameter variation on spherical-tumor quantification for the promising brain tumor imaging agent 123I-CLINDE. To assess the quality of the images reconstructed for the different aperture sizes, we used a customized multiple-sphere tumor phantom derived from the XCAT software, with a tumor size of 1 cm in diameter. Our results suggest, through quantification and visual inspection, that an aperture diameter in the range of 2 to 5 mm for the adaptive AdaptiSPECT-C system is likely best suited for high-performance 123I-CLINDE brain tumor imaging. In addition, our study concludes that a 4-mm pinhole diameter, given its excellent spatial-resolution-to-sensitivity trade-off, is promising for scout acquisitions that localize target tumor regions within the brain. We have initiated a task-based study of tumor detection and localization accuracy for a range of simulated tumor sizes using the channelized non-pre-whitening (CNPW) matched-filter scanning observer.
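The scanning observer mentioned above applies a matched-filter template at each candidate location and takes the location of the maximum test statistic as the localization estimate. Below is a minimal, non-channelized non-pre-whitening sketch under simplifying assumptions (a known Gaussian signal template and a known scalar mean background); the names are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_template(size, sigma):
    """Hypothetical circular-Gaussian signal template, unit-normalized."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    s = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return s / np.linalg.norm(s)

def npw_scan(image, template, background_mean):
    """Non-pre-whitening matched filter evaluated at every location:
    t(r) = s_r^T (g - b_bar), computed by direct correlation.
    Returns the map of test statistics and the argmax location."""
    g = image - background_mean
    k = template.shape[0]
    H, W = g.shape
    tmap = np.full((H - k + 1, W - k + 1), -np.inf)
    for i in range(tmap.shape[0]):
        for j in range(tmap.shape[1]):
            tmap[i, j] = np.sum(template * g[i:i + k, j:j + k])
    loc = np.unravel_index(np.argmax(tmap), tmap.shape)
    return tmap, loc
```

In a detection/localization study, the maximum of `tmap` is thresholded for detection and `loc` scores localization accuracy; a channelized version would first project each windowed patch onto a small set of channels.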
- Cronin, K. P., Kupinski, M. A., Woolfenden, J. M., Yabu, G., Kawamura, T., Takeda, S., Takahashi, T., & Furenlid, L. R. (2020). Design of a Multi-technology Pre-clinical SPECT System. In 2020 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2020. All imaging techniques have fundamental trade-offs as a consequence of the physics that govern the image-forming technique in combination with limitations imposed by the detector technology. In SPECT systems, that trade-off is among energy resolution, spatial resolution, field of view, and sensitivity. SPECT detectors would ideally have large area and stopping power, excellent energy and spatial resolution, and high count-rate capability. To date, no single detector combines all of these attributes, nor is there a single collimation strategy that is effective under all circumstances. In prior theoretical work, we have shown that image quality, as defined by objective task-performance measures, can in principle be improved by combining multiple detector and collimator strategies in the same system [3]. In this work, we present a design for a pre-clinical imager combining an intensified quantum imaging detector (iQID) and a CdTe crossed-strip semiconductor detector. The iQID scintillation detector can achieve excellent spatial resolution while also delivering high sensitivity, but with limited energy resolution. This complements the semiconductor detector's ability to achieve excellent energy and spatial resolution, but at limited count rates and with a smaller detector area. By jointly reconstructing data sets acquired concurrently, we seek to produce a SPECT system that has high energy and spatial resolution without sacrificing sensitivity or field of view. In this work we present the design considerations in building this multi-technology SPECT system.
- Doty, K. J., Li, X., Richards, R. G., King, M. A., Kuo, P. H., Kupinski, M. A., & Furenlid, L. R. (2020). Modular Camera Design Study for Human Brain SPECT System. In 2020 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2020. Single-photon emission computed tomography (SPECT) can be used with a wide variety of radioligands for drug discovery and pharmacokinetic studies of promising drugs for neurodegenerative diseases. We are developing a human brain SPECT system with a stationary array of detectors that will provide dynamic high-resolution, high-sensitivity imaging. We are assessing the benefits of incorporating cylindrically curved scintillation detectors, which, due primarily to a significant reduction in depth-of-interaction uncertainty, have resolution advantages over planar detectors at the edges. We are studying the use of a curved-to-planar fiber-optic plate to transfer the scintillation light from the curved crystal and light guide to a planar surface for photodetection using conventional methods. Another design component being evaluated is a novel light-sensor configuration combining photomultiplier tubes (PMTs) and silicon photomultipliers (SiPMs). Simulation methods were used to predict the performance of a variety of detector layouts. The purpose of the study was to balance the trade-off between detector cost and performance, as the final imager will comprise 24 camera modules. We demonstrate that combining PMTs and SiPMs for electronic readout achieves a spatial-resolution advantage at the edges while maintaining a lower cost than a full SiPM readout or a curved detector.
- Kupinski, M. A., Garrett, Z., & Fan, J. (2020). Observer-driven texture analysis in CT imaging. In Medical Imaging 2020: Image Perception, Observer Performance, and Technology Assessment, 11316. We have implemented a technique for analyzing and characterizing the textures in medical images. This technique generates a list of characteristic textures and sorts them from most important to least important for the task of detecting a specific signal in the image. The effects of the human visual system can be incorporated into this method through the use of an eye filter. The final set of sorted textures can be quickly utilized to analyze new sets of images and make comparisons regarding performance on the same task. This analysis is based upon whether the new set of images contains textures that are similar or dissimilar to those of the original set of images. We present the method for analyzing and sorting textures based on how well signals can be distinguished. We also discuss the importance of the most "obscuring" textures that make signal detection difficult. Results and comparisons of task performance are presented.
- Clarkson, E., Cronin, K. P., Furenlid, L. R., Humm, J. L., Kupinski, M. A., & Woolfenden, J. M. (2019). Assessment of SPECT Systems Using Multiple Detector Technologies. In 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC). Conventional SPECT systems, either rotating or stationary, are typically outfitted with a single type of detector. The use of identical detectors simplifies design and reconstruction in these systems. However, when a single detector technology is used, all detectors suffer the same limitations, and image quality can only be improved through additional angular sampling, better collimation, or improved injection protocols. In this paper, we analyze the concept of utilizing two or more detector technologies during the same acquisition and the potential impact on image quality of exploiting the benefits of each respective technology. There is always a design trade-off among energy resolution, spatial resolution, sensitivity, and count rate. A combination of SPECT technologies in a single system could reduce these limitations for a desired application. We have modeled a SPECT system with multiple detector technologies to compare it to systems with a single SPECT technology but the same number of detectors. An analysis framework has been developed to explore the fundamental gains in performance that can be achieved when using multiple technologies and to study the implementation of image reconstruction with these datasets.
- Goding, J. C., Auer, B., Zeraatkar, N., Kupinski, M. A., Furenlid, L. R., & King, M. A. (2018). A Fourier Crosstalk Analysis of a Brain SPECT Imaging System-Initial Results. In 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC). A joint research program at the University of Arizona and the University of Massachusetts Medical School is designing a next-generation, adaptive, dedicated brain-imaging single photon emission computed tomography (SPECT) system. It consists of multi-pinhole, modular gamma cameras arrayed around a hemisphere that encloses the volume of interest. The adaptive feature of the system stems from the capability of shuttering selected pinholes for each module. Critical to the design process is the ability to vary parameters such as the number, location, and size of the pinholes and quantify their effects on system performance. One approach is to use the Fourier crosstalk matrix (FXM), which is derived from the system matrix and can provide a modulation transfer function (MTF)-type measure of system resolution. An FXM-based study has been initiated to answer some of the questions regarding pinhole/detector placement in the proposed SPECT system. The initial work has entailed obtaining system matrices for individual detector modules and then deriving the FXM from them (via a multidimensional discrete Fourier transform, DFT).
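For a discretized system matrix H, the Fourier crosstalk matrix can be written as β = (HU)†(HU), where the columns of U are the DFT basis vectors over the object grid: the diagonal of β gives the MTF-type measure of how well each spatial frequency is transferred, and off-diagonal terms quantify crosstalk (aliasing) between frequency components. A small numpy sketch, assuming a dense H small enough to hold in memory:

```python
import numpy as np

def fourier_crosstalk(H):
    """Fourier crosstalk matrix for a system matrix H
    (M detector bins x N object voxels):
        beta_{kk'} = u_k^H H^H H u_{k'},
    with u_k the unitary DFT basis vectors over the object grid.
    For a 2D/3D object grid, H's columns would be ordered so that a
    multidimensional DFT factorizes into this 1D form per axis."""
    N = H.shape[1]
    U = np.fft.fft(np.eye(N), axis=0) / np.sqrt(N)  # unitary DFT basis
    HU = H @ U
    return HU.conj().T @ HU
```

For a perfect system (H = identity), β is the identity: every frequency is measured with unit strength and no crosstalk. Comparing the diagonals of β for different pinhole configurations gives a resolution-style ranking, while growth of off-diagonal energy flags aliasing.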
- Kupinski, M. A., Furenlid, L. R., Li, X., & Lin, A. (2018). GPU-Assisted Generation of System Matrices for High-Resolution Imaging Systems. In 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC). Traditional methods of point-spread-function (PSF) modeling for pinhole SPECT systems, such as fitting the PSF with a 2D Gaussian, are generally sufficient for characterizing imaging systems in which the detector is the main source of PSF blur. However, when modeling the PSF of a high-resolution imaging system, these methods are too simplistic and fail to capture important PSF features, resulting in errors in later simulation studies. In this work, we present a method for parameterizing point spread functions that is then used to rapidly generate system matrices of gamma-ray imaging systems using high-resolution detectors with sub-millimeter spatial resolution. Our algorithm, which utilizes a GPU-based ray tracer to simulate a system's true PSF, accounts for the PSF blur due not only to the detector's depth of interaction, but also to penetration through the pinhole and the finite size of the source. By considering all three blurring sources, we are able to better model the PSF, capturing features like the flat top in the center and the tilt and drop-off towards the edges. Comparisons to the 2D Gaussian fitting model are examined, and we report that as the ratio between the source size and pinhole radius increases, the two models converge to one another.
- Kupinski, M. A., & Nishikawa, R. M. (2017). Medical Imaging 2017: Image Perception, Observer Performance, and Technology Assessment. In Medical Imaging 2017: Image Perception, Observer Performance, and Technology Assessment, 10136.
- Kupinski, M. A., Barrett, H. H., Furenlid, L. R., Momsen, N. C., & Richards, G. (2017). Maximum-Likelihood Event Parameter Estimation from Digital Waveform Capture. In 2017 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC). A preclinical single photon emission computed tomography (SPECT) system design is presented that will be used in rabbit myocardial perfusion and related cardiac studies. The system includes digital waveform capture of all PMT signals, which allows for optimal maximum-likelihood estimation in the gamma-ray event-parameter estimation and tomographic reconstruction processes. In a typical gamma-ray camera, only the integrated signal of a gamma-ray event is recorded. Here, because the entire waveform is recorded, it is possible to incorporate information from the waveform shape into the maximum-likelihood estimation. A likelihood model for incorporating waveforms in estimating event parameters (x, y, z, energy, and time) is explored. The detector, a full-size clinical SPECT camera with 61 PMTs, was retrofitted with an array of active buffers that tap into raw low-level signals before they reach the Anger-logic network. Methods for system calibration and integration are discussed, along with predictions and measurements of system performance.
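Under a Poisson noise model, the ML event-position estimate maximizes the log-likelihood Σ_j [g_j ln ḡ_j(θ) − ḡ_j(θ)] over event parameters θ. A minimal sketch of a grid-search version, assuming a calibrated mean detector response function (MDRF) sampled at discrete interaction positions; the variable names are illustrative, and the paper's model additionally uses the full waveform shape rather than only integrated PMT signals:

```python
import numpy as np

def ml_position_estimate(g, mdrf):
    """Grid-search maximum-likelihood event-position estimation under
    a Poisson model.

    g:    observed signals on the J light sensors (length-J vector).
    mdrf: (K x J) calibrated mean detector response; mdrf[k, j] is the
          mean signal on sensor j for interaction position k.

    Returns the index k maximizing
        sum_j [ g_j * log(gbar_kj) - gbar_kj ].
    """
    gbar = np.clip(mdrf, 1e-12, None)          # guard against log(0)
    loglike = g @ np.log(gbar).T - gbar.sum(axis=1)
    return int(np.argmax(loglike))
```

In practice the grid is refined around the coarse maximum (e.g., a contracting-grid search), and energy and timing are estimated jointly from the same likelihood.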
- Khalil, M., Brubaker, E. M., Hilton, N. R., Kupinski, M. A., Macgahan, C. J., & Marleau, P. A. (2016). Null-hypothesis testing using distance metrics for verification of arms-control treaties. In 2016 IEEE Nuclear Science Symposium, Medical Imaging Conference and Room-Temperature Semiconductor Detector Workshop (NSS/MIC/RTSD). We investigate the feasibility of constructing a data-driven distance metric for use in null-hypothesis testing in the context of arms-control treaty verification. The distance metric is used in testing the hypothesis that the available data are representative of a certain object or not, as opposed to the binary-classification tasks studied previously. The metric, being of strictly quadratic form, is essentially computed using projections of the data onto a set of optimal vectors. These projections can be accumulated in list mode. The relatively low number of projections hampers the possible reconstruction of the object and subsequently the access to sensitive information. The projection vectors that channelize the data are optimal in capturing the Mahalanobis squared distance of the data associated with a given object under varying nuisance parameters. The vectors are also chosen such that the resulting metric is insensitive to the difference between the trusted object and another object that is deemed to contain sensitive information. Data used in this study were generated using the GEANT4 toolkit to model gamma transport with a Monte Carlo method. For numerical illustration, the methodology is applied to synthetic data obtained using custom models for plutonium inspection objects. The resulting metric, based on a relatively low number of channels, shows moderate agreement with the full Mahalanobis distance metric for the trusted object while enabling the capability to obscure sensitive information.
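The quadratic-metric idea above can be illustrated with a simplified sketch: projecting the mean-subtracted data onto a few vectors (here, covariance eigenvectors scaled by inverse square-root eigenvalues) and summing the squared projections approximates the Mahalanobis squared distance while storing only k channel vectors. This is a hypothetical reduced version; the optimal vectors in the paper also satisfy an insensitivity constraint not shown here.

```python
import numpy as np

def projection_vectors(K, k):
    """Top-k channels: eigenvectors of covariance K, each scaled by
    1/sqrt(eigenvalue), so squared projections approximate the
    Mahalanobis squared distance using only k stored vectors."""
    w, V = np.linalg.eigh(K)
    idx = np.argsort(w)[::-1][:k]          # keep largest-variance modes
    return V[:, idx] / np.sqrt(w[idx])

def distance_metric(g, gbar, P):
    """Strictly quadratic test statistic d^2 = || P^T (g - gbar) ||^2.
    The projections z can be accumulated event-by-event in list mode."""
    z = P.T @ (g - gbar)
    return float(z @ z)
```

With k equal to the full data dimension this recovers the exact Mahalanobis distance; shrinking k trades fidelity for information security, since the few stored projections are insufficient to reconstruct the object.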
- Barrett, H. H., Alberts, D. S., Woolfenden, J. M., Liu, Z., Clarkson, E. W., Kupinski, M. A., Furenlid, L. R., & Hoppin, J. (2015, August). Quantifying and Reducing Uncertainties in Cancer Therapy. In Proceedings of SPIE, 9412, 9412N-4.
- Ghanbari, N., Kupinski, M. A., & Furenlid, L. R. (2015, August). Optimization of an adaptive SPECT system with the scanning linear estimator. In SPIE, 9594, 95940A.
- Huang, J., Yao, J., Cirucci, N., Ivanov, T., & Rolland, J. P. (2015). Thickness estimation with optical coherence tomography and statistical decision theory. In SPIE Optifab.
- Huang, J., Yuan, Q., Tankam, P., Clarkson, E., Kupinski, M., Hindman, H. B., Aquavella, J. V., & Rolland, J. P. (2015). Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation. In SPIE BiOS.
- Lin, A. L., Johnson, L. C., Shokouhi, S., Peterson, T. E., & Kupinski, M. A. (2015). Using the Wiener estimator to determine optimal imaging parameters in a synthetic-collimator SPECT system used for small animal imaging. In SPIE Medical Imaging.
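The Wiener estimator referenced in this title is the linear estimator minimizing ensemble mean-squared error: θ̂ = θ̄ + K_θg K_g⁻¹ (g − ḡ). A minimal sketch, assuming the cross-covariance K_θg and data covariance K_g are known or have been estimated from simulated samples (names are illustrative):

```python
import numpy as np

def wiener_estimate(g, g_mean, theta_mean, K_tg, K_g):
    """Wiener (linear MMSE) estimate of parameters theta from data g:
        theta_hat = theta_bar + K_theta_g @ K_g^{-1} @ (g - g_bar).
    K_tg: cross-covariance of theta and g; K_g: data covariance."""
    return theta_mean + K_tg @ np.linalg.solve(K_g, g - g_mean)
```

In system-optimization studies, the ensemble mean-squared error of this estimator serves as a task-based figure of merit: imaging parameters (e.g., aperture settings) are varied and the configuration minimizing the error is preferred.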
- Macgahan, C. J., Kupinski, M. A., Hilton, N. R., Brubaker, E. M., & Johnson, W. C. (2015). A channelized hotelling observer for treaty-verification tasks. In 2015 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 1-1. Binary-discrimination tasks useful to arms-control-treaty verification were performed by applying mathematical observer models used by the medical-imaging community. This is a difficult task, as the monitor needs to verify that a measured object is a warhead while the host wants to prevent dissemination of sensitive information about their weapons. Inspection objects were classified by their projection data, without reconstructing an image, which often contains sensitive information. Furthermore, the models process data event by event, with only a scalar test statistic being updated before the observed data are purged from memory, preventing the aggregation of information that would be sensitive. Template-matching models were developed from a set of calibration data on these objects, with the ultimate goal being to develop an observer model that stores only non-sensitive information sufficient for confirmation. These models were also analyzed in the presence of nuisance parameters: unknowns in the objects or imaging system, such as object location and orientation, that affect the projection data but are not of interest to the discrimination task. The Hotelling observer and channelized Hotelling observer were modeled and their benefits for information security analyzed. The Hotelling observer, the ideal linear observer when the statistics of the data are Gaussian, stores only a set of weights: the product of the inverse average covariance matrix and the difference in mean data between the two objects in the discrimination task. When testing a source, an inner product of the Hotelling weights and the binned testing data is taken, resulting in a scalar that is thresholded to make a decision. If nuisance parameters are present, the mean and covariance matrix are found by averaging not only over the always-present Poisson noise but also over the nuisance-parameter distributions. Hence, even if detector data are gathered from multiple realizations of an object, the Hotelling weights will be a single smeared-out data set the size of the measured data. The Hotelling weights also sift out all information other than the differences between the two objects. Hence, if the monitor were to gain access to the weights and apply the inverse to their test statistic, they could only back out a scaled version of the template, not the image. The channelized Hotelling observer, a common tool used in medical image-quality assessment, was also investigated. This method drastically reduces the size of the data by applying a channelizing matrix to the binned testing data set. An optimal set of weights for the channelized data can then be found, and the inner product between these weights and the channelized vector yields the scalar test statistic. Optimal performance is retained by optimizing the matrix to maximize the SNR² between the test-statistic distributions for the two sources in the task. The channelized Hotelling observer gives the monitor access to multiple non-sensitive test statistics. In practice, the channelizing matrix could be implemented in hardware or software behind an information barrier. The addition of penalty terms to the channelized Hotelling observer offers further potential for this method. Individual channel performance could be penalized, leading to a large number of non-sensitive channels that the monitor can use to verify the channelization routine is working as described without gaining access to sensitive information. Noise could be added to the resulting channels, causing an additional reduction of the total stored information. Finally, if the host can define in advance what information in its object geometries is sensitive, they could optimize the differences between the two objects in the task while penalizing out the sensitive information, creating a non-sensitive channelizing matrix that could be shared with the monitor. To test these models, Monte Carlo simulations were performed with the GEANT4 toolkit. Photons were tracked from plutonium inspection objects developed by Idaho National Laboratory. We simulated the fast-neutron imaging system designed by Oak Ridge National Laboratory and Sandia National Laboratories, which consists of a 40×40 array of 1-cm² liquid-scintillator pixels with a plastic coded aperture. Observer models were evaluated using the area under the ROC curve. This work is supported by the Office of Defense Nuclear Nonproliferation Research and Development, Nuclear Weapon and Material Security Team. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. (SAND2015-10391A).
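The Hotelling and channelized Hotelling computations described in this abstract reduce to a few linear-algebra steps: channelize the data, form the difference of channel-space means and the average channel covariance, solve for the weights, and take an inner product to get the scalar test statistic, with SNR² = Δv̄ᵀ S⁻¹ Δv̄ as the figure of merit. A minimal numpy sketch; the sample data, channel matrix, and function names are illustrative:

```python
import numpy as np

def cho_train(G0, G1, T):
    """Channelized Hotelling observer training.

    G0, G1: (n_samples x n_pix) data sets under the two hypotheses.
    T:      (n_pix x n_channels) channelizing matrix.
    Returns the channel-space Hotelling weights and SNR^2."""
    V0, V1 = G0 @ T, G1 @ T                      # channelize the data
    dv = V1.mean(axis=0) - V0.mean(axis=0)       # difference of means
    S = 0.5 * (np.cov(V0, rowvar=False) + np.cov(V1, rowvar=False))
    w = np.linalg.solve(np.atleast_2d(S), dv)    # w = S^{-1} dv
    return w, float(dv @ w)                      # SNR^2 = dv^T S^{-1} dv

def cho_test_statistic(g, T, w):
    """Scalar test statistic t = w^T (T^T g) for a new data vector g;
    t is thresholded to make the binary decision."""
    return float(w @ (g @ T))
```

Because only `T` and `w` need be retained, the stored information is the small channel-space template rather than the full projection data, which is the information-security property exploited in the paper.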
- Tseng, H., Fan, J., & Kupinski, M. A. (2015). Combination of detection and estimation tasks using channelized scanning linear observer for CT imaging systems. In SPIE Medical Imaging.
- Wang, K., Lou, Y., Kupinski, M. A., & Anastasio, M. A. (2015). Sparsity-driven ideal observer for computed medical imaging systems. In SPIE Medical Imaging.
- Barrett, H. H., Barber, H. B., Kupinski, M. A., Stevenson, G. D., Woolfenden, J. M., Clarkson, E. W., Furenlid, L. R., & Liu, Z. (2014). Molecular Imaging in the College of Optical Sciences - An Overview of Two Decades of Instrumentation Development. In Proceedings of SPIE - the International Society for Optical Engineering, 9186. During the past two decades, researchers at the University of Arizona's Center for Gamma-Ray Imaging (CGRI) have explored a variety of approaches to gamma-ray detection, including scintillation cameras, solid-state detectors, and hybrids such as the intensified Quantum Imaging Device (iQID) configuration where a scintillator is followed by optical gain and a fast CCD or CMOS camera. We have combined these detectors with a variety of collimation schemes, including single and multiple pinholes, parallel-hole collimators, synthetic apertures, and anamorphic crossed slits, to build a large number of preclinical molecular-imaging systems that perform Single-Photon Emission Computed Tomography (SPECT), Positron Emission Tomography (PET), and X-Ray Computed Tomography (CT). In this paper, we discuss the themes and methods we have developed over the years to record and fully use the information content carried by every detected gamma-ray photon.
- Chaix, C., Kovalsky, S., Kupinski, M. A., Barrett, H. H., & Furenlid, L. R. (2014). Fabrication of the pinhole aperture for AdaptiSPECT. In SPIE Optical Engineering + Applications, 921408.
- Chaix, C., Kovalsky, S., Kupinski, M. A., Barrett, H. H., & Furenlid, L. R. (2014, Fall). Design and fabrication of a preclinical adaptive SPECT imaging system: AdaptiSPECT. In GPSC Student Showcase.
- Hilton, N. R., Johnson, W. C., Brubaker, E., Kupinski, M. A., & Macgahan, C. J. (2014). Optimal imaging for treaty verification FY2014 annual report. In NA22 Review.
- Huang, J., Clarkson, E., Kupinski, M., & Rolland, J. P. (2014). Simultaneous measurement of lipid and aqueous layers of tear film using optical coherence tomography and statistical decision theory. In SPIE BiOS, 89360A.
- Macgahan, C. J., Kupinski, M. A., Hilton, N. R., Johnson, W. C., & Brubaker, E. M. (2014). Development of a list-mode ideal observer to perform classification tasks when imaging nuclear inspection objects under signal-known-exactly conditions. In 2014 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 1-5. We developed a signal-known-exactly version of the ideal observer that processes data in list-mode format to perform binary classification, a useful task for arms-control treaty applications. This observer offers the best possible performance and future observer models developed in our work will be compared to this model. The two examined sources were plutonium inspection objects developed by Idaho National Lab. We modeled a fast-neutron coded-aperture imager, developed by Oak Ridge National Lab and Sandia National Labs to acquire simulation data. Monte Carlo simulations using the GEANT4 toolkit tracked photons and neutrons from these objects to the imager. The observer model was evaluated using the area under the ROC curve for multiple background strengths.
- Tseng, H., Fan, J., Kupinski, M. A., & Sainath, P. (2014). Design of a practical model-observer-based image quality assessment method for CT imaging systems. In SPIE Medical Imaging, 90370O.
- Welge, W. A., DeMarco, A. T., Watson, J. M., Rice, P. S., Barton, J. K., & Kupinski, M. A. (2014). Objective assessment of multimodality optical coherence tomography and second-harmonic generation image quality of ex vivo mouse ovaries using human observers. In SPIE BiOS, 893609.
- Caucci, L., Jha, A. K., Furenlid, L. R., Clarkson, E. W., Kupinski, M. A., & Barrett, H. H. (2013). Image science with photon-processing detectors. In Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2013 IEEE, 1-7.
- Dumas, C., Bernstein, A., Espinoza, A., Morgan, D., Lewis, K., Nipper, M., Barrett, H. H., Kupinski, M. A., & Furenlid, L. R. (2013). SmartCAM: an adaptive clinical SPECT camera. In SPIE Optical Engineering + Applications, 885307.
- Fan, J., Tseng, H., Kupinski, M., Cao, G., Sainath, P., & Hsieh, J. (2013). Study of the radiation dose reduction capability of a CT reconstruction algorithm: LCD performance assessment using mathematical model observers. In SPIE Medical Imaging, 86731Q.
- Huang, J., Clarkson, E., Kupinski, M., & Rolland, J. P. (2013). Thickness Estimation with Optical Coherence Tomography and Statistical Decision Theory. In CIOMP-OSA Summer Session on Optical Engineering, Design and Manufacturing, Tu9.
- Jha, A. K., Clarkson, E., Kupinski, M. A., & Barrett, H. H. (2013). Joint reconstruction of activity and attenuation map using LM SPECT emission data. In SPIE Medical Imaging, 86681W.
- Huang, J., Lee, K., Clarkson, E., Kupinski, M., & Rolland, J. P. (2012). Task-based Assessment and Optimization of Spectral Domain Optical Coherence Tomography for Tear Film Imaging. In Frontiers in Optics, FTu3A.39.
- Jha, A., Kupinski, M., & Van Dam, H. (2011). Monte Carlo simulation of Silicon Photomultiplier output in response to scintillation induced light. In Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2011 IEEE, 1693--1696.
- Barrett, H. H., Wilson, D. W., Kupinski, M. A., Aguwa, K., Ewell, L., Hunter, R., & Müller, S. (2010). Therapy operating characteristic (TOC) curves and their application to the evaluation of segmentation algorithms. In SPIE Medical Imaging, 76270Z.
- Jha, A. K., Kupinski, M. A., Kang, D., & Clarkson, E. (2010). Solutions to the radiative transport equation for non-uniform media. In Biomedical Optics, BSuD55.
- Jha, A. K., Kupinski, M. A., Rodríguez, J. J., Stephen, R. M., & Stopeck, A. T. (2010). ADC estimation in multi-scan DWMRI. In Digital Image Processing and Analysis, DTuB3.
- Jha, A. K., Kupinski, M. A., Rodríguez, J. J., Stephen, R. M., & Stopeck, A. T. (2010). Evaluating segmentation algorithms for diffusion-weighted MR images: a task-based approach. In SPIE Medical Imaging, 76270L.
- Jha, A. K., Kupinski, M. A., Rodriguez, J., Stephen, R. M., & Stopeck, A. T. (2010). ADC estimation of lesions in diffusion-weighted MR images: A maximum-likelihood approach. In Image Analysis & Interpretation (SSIAI), 2010 IEEE Southwest Symposium on, 209-212.
- Young, S., Kupinski, M. A., & Jha, A. K. (2010). Estimating signal detectability in a model diffuse optical imaging system. In Biomedical Optics, BSuD26.
- Palit, R., Kupinski, M. A., Barrett, H. H., Clarkson, E. W., Aarsvold, J. N., Volokh, L., & Grobshtein, Y. (2009). Singular value decomposition of pinhole SPECT systems. In SPIE Medical Imaging, 72631U.
- Caucci, L., Kupinski, M. A., Freed, M., Furenlid, L. R., Wilson, D. W., & Barrett, H. H. (2008). Adaptive SPECT for tumor necrosis detection. In Nuclear Science Symposium Conference Record, 2008. NSS'08. IEEE, 5548-5551.
- Furenlid, L., Moore, J., Freed, M., Kupinski, M. A., Clarkson, E., Liu, Z., Wilson, D., Woolfenden, J., & Barrett, H. H. (2008). Adaptive small-animal SPECT/CT. In Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008. 5th IEEE International Symposium on, 1407-1410.
- Kupinski, M. A., Liu, Z., Woolfenden, J. M., Barrett, H. H., Clarkson, E., Freed, M., Furenlid, L. R., Kupinski, K., Moore, J. W., & Wilson, D. W. (2008). Adaptive small-animal SPECT/CT. In IEEE Nuclear Science Symposium conference record. Nuclear Science Symposium, 2008, 1407-1410. We are exploring the concept of adaptive multimodality imaging, a form of non-linear optimization where the imaging configuration is automatically adjusted in response to the object. Preliminary studies suggest that substantial improvement in objective, task-based measures of image quality can result. We describe here our work to add motorized adjustment capabilities and a matching CT to our existing FastSPECT II system to form an adaptive small-animal SPECT/CT.
- Brème, A., Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2007). Adaptive Hotelling discriminant functions. In Medical Imaging, 65150T.
- Freed, M., Kupinski, M. A., Furenlid, L. R., & Barrett, H. H. (2007). A prototype instrument for adaptive SPECT imaging. In Medical Imaging, 65100V.
- Hesterman, J. Y., Kupinski, M. A., Clarkson, E., Wilson, D. W., & Barrett, H. H. (2007). Evaluation of hardware in a small-animal SPECT system using reconstructed images. In Medical Imaging, 65151G.
- Kupinski, M. A., Clarkson, E., & Hesterman, J. Y. (2007). Bias in Hotelling observer performance computed from finite data. In Medical Imaging, 65150S.
- Park, S., Clarkson, E., Barrett, H. H., Kupinski, M. A., & Myers, K. J. (2006). Performance of a channelized-ideal observer using Laguerre-Gauss channels for detecting a Gaussian signal at a known location in different lumpy backgrounds. In Medical Imaging, 61460P.
- Barrett, H. H., Clarkson, E., Furenlid, L. R., & Kupinski, M. (2005). Task-based Assessment and Optimization of Gamma-ray Imaging Systems. In Frontiers in Optics, FThM1.
- Barrett, H. H., Kupinski, M. A., & Clarkson, E. (2005). Probabilistic foundations of the MRMC method. In Medical Imaging, 21-31.
- Gross, K. A., & Kupinski, M. A. (2005). SPECT Image Quality Assessment and System Parameter Optimization for Detection Tasks. In Frontiers in Optics, FThM2.
- Gross, K. A., Kupinski, M. A., & Hesterman, J. Y. (2005). A fast model of a multiple-pinhole SPECT imaging system. In Medical Imaging, 118-127.
- Hesterman, J. Y., Kupinski, M. A., Furenlid, L. R., & Wilson, D. W. (2005). Experimental task-based optimization of a four-camera variable-pinhole small-animal SPECT system. In Medical Imaging, 300-309.
- Kupinski, M. A., & Clarkson, E. (2005). Extending the channelized Hotelling observer to account for signal uncertainty and estimation tasks. In Medical Imaging, 183-190.
- Park, S., Clarkson, E., Kupinski, M. A., & Barrett, H. H. (2005). Efficiency of human and model observers for signal-detection tasks in non-Gaussian distributed lumpy backgrounds. In Medical Imaging, 138-149.
- Kupinski, M. A., & Clarkson, E. (2004). Image-quality assessment in optical tomography. In Biomedical Imaging: Nano to Macro, 2004. IEEE International Symposium on, 1471-1474.
- Park, S., Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2004). Efficient channels for the ideal observer. In Medical Imaging 2004, 12-21.
- Clarkson, E., Kupinski, M. A., & Hoppin, J. W. (2003). Assessing the accuracy of estimates of the likelihood ratio. In Medical Imaging 2003, 135-143.
- Gross, K., Kupinski, M. A., Peterson, T. E., & Clarkson, E. (2003). Optimizing a multiple-pinhole SPECT system using the ideal observer. In Medical Imaging 2003, 314-322.
- Hoppin, J. W., Kupinski, M. A., Wilson, D. W., Peterson, T. E., Gershman, B., Kastis, G., Clarkson, E., Furenlid, L., & Barrett, H. H. (2003). Evaluating estimation techniques in medical imaging without a gold standard: experimental validation. In Medical Imaging 2003, 230-237.
- Kupinski, M. A., & Kupinski, K. (2003). Guest editor's introduction - Computing in optics. In Computing in Science & Engineering, 5, 13-14. Computing is pervasive in almost all modern scientific endeavors, but nowhere is the need for computing power and efficient computational techniques more apparent than in the optics and imaging fields.
- Kupinski, M. A., Barrett, H. H., Clarkson, E., & Park, S. (2003). Ideal-observer performance under signal and background uncertainty. In Information processing in medical imaging : proceedings of the ... conference, 18, 342-53. We use the performance of the Bayesian ideal observer as a figure of merit for hardware optimization because this observer makes optimal use of signal-detection information. Due to the high dimensionality of certain integrals that need to be evaluated, it is difficult to compute the ideal observer test statistic, the likelihood ratio, when background variability is taken into account. Methods have been developed in our laboratory for performing this computation for fixed signals in random backgrounds. In this work, we extend these computational methods to compute the likelihood ratio in the case where both the backgrounds and the signals are random with known statistical properties. We are able to write the likelihood ratio as an integral over possible backgrounds and signals, and we have developed Markov-chain Monte Carlo (MCMC) techniques to estimate these high-dimensional integrals. We can use these results to quantify the degradation of the ideal-observer performance when signal uncertainties are present in addition to the randomness of the backgrounds. For background uncertainty, we use lumpy backgrounds. We present the performance of the ideal observer under various signal-uncertainty paradigms with different parameters of simulated parallel-hole collimator imaging systems. We are interested in any change in the rankings between different imaging systems under signal and background uncertainty compared to the background-uncertainty case. We also compare psychophysical studies to the performance of the ideal observer.
- Kupinski, M. A., Clarkson, E., Gross, K., & Hoppin, J. W. (2003). Optimizing imaging hardware for estimation tasks. In Medical Imaging 2003, 309-313.
- Drukker, K., Giger, M., Horsch, K., Kupinski, M., Vyborny, C., & Mendelson, E. (2002). Automatic lesion detection on breast ultrasound. In Radiology, 222, 588.
- Kupinski, M. A., Clarkson, E., & Barrett, H. H. (2002). Matching statistical object models to real images. In Medical Imaging 2002, 37-42.
- Edwards, D. C., Papaioannou, J., Jiang, Y., Kupinski, M. A., & Nishikawa, R. M. (2001). Eliminating false-positive microcalcification clusters in a mammography CAD scheme using a Bayesian neural network. In Medical Imaging 2001, 1954-1960.
- Hoppin, J., Kupinski, M., Kastis, G., Clarkson, E., & Barrett, H. H. (2001). Objective comparison of quantitative imaging modalities without the use of a gold standard. In SIAM Annual Meeting. Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart in order to know how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known, e.g., a cardiologist does not know the true EF of a patient. Thus, researchers have often evaluated an estimation method by plotting its results against the results of another (more accepted) estimation method, which amounts to using one set of estimates as a pseudo-gold standard. In this paper, we present a maximum likelihood approach for evaluating and comparing different estimation methods without the use of a gold standard, with specific emphasis on the problem of evaluating EF estimation methods. Results of numerous simulation studies will be presented and indicate that the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x-axis.
- Edwards, D. C., Kupinski, M. A., Nishikawa, R. M., & Metz, C. E. (2000). Estimation of linear observer templates in the presence of multi-peaked Gaussian noise through 2AFC experiments. In Medical Imaging 2000, 86-96.
- Esthappan, J., Kupinski, M., Lan, L., & Hoffmann, K. (2000). A method for the determination of the 3D orientations and positions of catheters from single-plane X-ray images. In Engineering in Medicine and Biology Society, 2000. Proceedings of the 22nd Annual International Conference of the IEEE, 3, 2029-2032.
- Kupinski, M. A., Anastasio, M. A., & Giger, M. L. (2000). Multiobjective genetic optimization of diagnostic classifiers used in the computerized detection of mass lesions in mammography. In Medical Imaging 2000, 40-45.
- Kupinski, M., Giger, M., & Baehr, A. (1999). Computerized detection of mass lesions in digital mammography using radial gradient index filtering. In Radiology, 213, 229.
- Nishikawa, R., Giger, M., Yarusso, L., Kupinski, M., Baehr, A., & Venta, L. (1999). Computer-aided diagnosis (CAD) of images obtained on full-field digital mammography. In Radiology, 213, 229.
- Anastasio, M. A., Kupinski, M. A., Nishikawa, R. M., & Giger, M. L. (1998). A multiobjective approach to optimizing computerized detection schemes. In Nuclear Science Symposium, 1998. Conference Record. 1998 IEEE, 3, 1879-1883.
- Anastasio, M. A., Kupinski, M., & Pan, X. (1998). New classes of reconstruction methods in reflection mode diffraction tomography. In Ultrasonics Symposium, 1998. Proceedings., 1998 IEEE, 1, 839-842.
- Kupinski, M., & Giger, M. (1998). Computer-aided diagnosis: Feature selection with limited datasets. In Radiology, 209, 163.
- Anastasio, M., Kupinski, M., & Pan, X. (1997). Noise properties of reconstructed images in ultrasonic diffraction tomography. In Nuclear Science Symposium, 1997. IEEE, 2, 1561-1565.
- Kupinski, M. A., & Giger, M. L. (1997). Feature selection and classifiers for the computerized detection of mass lesions in digital mammography. In Neural Networks, 1997., International Conference on, 4, 2460-2463.
- Kupinski, M., & Giger, M. (1997). Investigation of regularized neural networks for the computerized detection of mass lesions in digital mammograms. In Engineering in Medicine and Biology Society, 1997. Proceedings of the 19th Annual International Conference of the IEEE, 3, 1336-1339.
- Kupinski, M. A., Giger, M. L., Lu, P., & Huo, Z. (1995). Computerized detection of mammographic lesions: Performance of artificial neural network with enhanced feature extraction. In Medical Imaging 1995, 598-605.
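Several of the proceedings above (e.g., the 2014 list-mode ideal-observer paper and the 2003 IPMI paper on ideal-observer performance) evaluate the Bayesian ideal observer by the area under the ROC curve. For binned detector data with conditionally independent Poisson counts and known means under each hypothesis, the ideal observer's log-likelihood ratio has a simple closed form. The sketch below is illustrative only: the per-bin means are invented toy values, not data from those papers, and the AUC is estimated empirically rather than analytically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy signal-known-exactly setting: known mean counts per detector bin for
# the two hypotheses (these means are invented for illustration).
M = 64
b1 = np.full(M, 5.0)                  # mean counts under hypothesis 1
b2 = b1.copy()
b2[20:30] += 3.0                      # hypothesis 2 adds counts in 10 bins

def log_likelihood_ratio(g):
    """Ideal-observer test statistic for independent Poisson bins:
    ln[p(g|b2)/p(g|b1)] = sum_m [ g_m ln(b2_m/b1_m) - (b2_m - b1_m) ]."""
    return g @ np.log(b2 / b1) - np.sum(b2 - b1)

# Draw data under each hypothesis and compute the test statistic per draw.
t1 = log_likelihood_ratio(rng.poisson(b1, size=(1000, M)))
t2 = log_likelihood_ratio(rng.poisson(b2, size=(1000, M)))

# Empirical AUC via the Wilcoxon-Mann-Whitney statistic, with ties split.
auc = (t2[:, None] > t1[None, :]).mean() + 0.5 * (t2[:, None] == t1[None, :]).mean()
print(f"AUC = {auc:.3f}")
```

Because the log-likelihood ratio is the optimal test statistic, no other observer applied to the same binned data can achieve a higher AUC, which is why this figure of merit serves as the benchmark in the papers above.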
Others
- Brubaker, E., MacGahan, C., Kupinski, M., Hilton, N. R., & Johnson, W. C. (2015). Information Barriers for Imaging.
- Giger, M. L., & Kupinski, M. A. (2000). Method and system for the segmentation and classification of lesions.
- Kupinski, M. A. (2000). Computerized pattern classification in medical imaging.
- Kupinski, M. A. (2000). Investigation of Genetic Algorithms for Computer-Aided Diagnosis.
