
Abhijit Mahalanobis

  • Associate Professor, Electrical and Computer Engineering
  • Member of the Graduate Faculty
Contact
  • (520) 621-2434
  • Electrical & Computer Engr, Rm. 230
  • Tucson, AZ 85721
  • amahalan@arizona.edu

Bio

No activities entered.

Interests

No activities entered.

Courses

2025-26 Courses

  • Dissertation
    ECE 920 (Fall 2025)
  • Fund Optics/Elec Engrs
    ECE 459 (Fall 2025)
  • Fund Optics/Elec Engrs
    ECE 559 (Fall 2025)

2024-25 Courses

  • Dissertation
    ECE 920 (Spring 2025)
  • Eng Appl Machine Learning
    ECE 523 (Spring 2025)
  • Research
    ECE 900 (Spring 2025)
  • Dissertation
    ECE 920 (Fall 2024)
  • Research
    ECE 900 (Fall 2024)
  • Thesis
    ECE 910 (Fall 2024)

2023-24 Courses

  • Dissertation
    ECE 920 (Spring 2024)
  • Eng Appl Machine Learning
    ECE 523 (Spring 2024)
  • Research
    ECE 900 (Spring 2024)
  • Thesis
    ECE 910 (Spring 2024)
  • Thesis
    OPTI 910 (Spring 2024)
  • Dissertation
    ECE 920 (Fall 2023)
  • Research
    ECE 900 (Fall 2023)

2022-23 Courses

  • Dissertation
    ECE 920 (Spring 2023)
  • Eng Appl Machine Learning
    ECE 523 (Spring 2023)
  • Research
    ECE 900 (Spring 2023)
  • Dissertation
    ECE 920 (Fall 2022)
  • Research
    ECE 900 (Fall 2022)


Scholarly Contributions

Journals/Publications

  • Driggers, R. G., Mahalanobis, A., Grimming, R., & McIntosh, B. (2021). LWIR sensor parameters for deep learning object detectors. OSA Continuum, 4(2), 529. doi:10.1364/osac.404600

Proceedings Publications

  • Mahalanobis, A., & Tayyab, M. (2022). Simultaneous Learning and Compression for Convolution Neural Networks. In ICIP 2022.
    Neural network compression techniques almost always operate on pretrained filters. In this paper we propose a sparse training method for simultaneous compression and learning, which operates in the eigen space of the randomly initialized filters and learns to represent the network compactly as it trains from scratch. This eliminates the usual two-step process of first training the network and then compressing it afterwards. To learn the sparse representations, we enforce group L1 regularization on the linear combination weights of the eigen filters. This yields recombined filters that have low rank and can be readily compressed with standard pruning and low-rank approximation methods. Moreover, we show that the L1 norm of the linear combination weights can serve as a proxy for filter importance during pruning. We demonstrate the effectiveness of our method by applying it to several CNN architectures, and show that it directly achieves the best compression with competitive accuracy compared to state-of-the-art methods for compressing pretrained networks. (A sketch of this scheme appears after this list.)
  • Mahalanobis, A. (1989). Minimum Variance SDF Design Using Adaptive Algorithms. In SPIE.
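
The abstract above describes learning convolution filters as sparse linear combinations of a fixed eigen basis of the randomly initialized filters, with a group L1 penalty on the combination weights. The following is a minimal PyTorch sketch of that idea under stated assumptions: the EigenConv2d module, its parameter names, and the penalty weight are illustrative inventions, not the authors' released code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EigenConv2d(nn.Module):
        # Hypothetical sketch: conv filters are linear combinations of a
        # fixed eigen basis computed from the randomly initialized filters.
        def __init__(self, in_ch, out_ch, k):
            super().__init__()
            w0 = torch.randn(out_ch, in_ch * k * k)       # random init, flattened
            _, _, vh = torch.linalg.svd(w0, full_matrices=False)
            self.register_buffer("basis", vh)             # (r, in_ch*k*k), frozen
            self.coeff = nn.Parameter(w0 @ vh.T)          # (out_ch, r), learned
            self.in_ch, self.out_ch, self.k = in_ch, out_ch, k

        def forward(self, x):
            # Recombine the eigen filters into ordinary conv filters.
            w = (self.coeff @ self.basis).view(self.out_ch, self.in_ch, self.k, self.k)
            return F.conv2d(x, w, padding=self.k // 2)

        def group_l1(self):
            # One group per eigen filter (a column of coeff); driving a whole
            # column to zero removes that basis direction and lowers the rank
            # of the recombined filters.
            return self.coeff.norm(dim=0).sum()

    layer = EigenConv2d(3, 16, 3)
    x = torch.randn(2, 3, 32, 32)
    task_loss = layer(x).pow(2).mean()          # stand-in for the real task loss
    loss = task_loss + 1e-3 * layer.group_l1()  # penalty weight is an assumption
    loss.backward()

After training, columns of coeff whose norm has fallen to (near) zero can be dropped, which corresponds to the pruning and low-rank approximation step the abstract describes; the column norms double as the filter-importance proxy.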

Profiles With Related Publications

  • Ronald G Driggers
