Neural Ensemble Model Classifies How Galaxies Interact

Published on January 15, 2026 | Translated from Spanish
[Figure] Representative image: two galaxies in the process of interacting or merging, with a heatmap overlay generated by the LIME tool highlighting the regions the neural model weighs most heavily in its classification.


Classifying encounters between galaxies is difficult because of their intricate shapes and because deep learning models often operate as black boxes. A new approach addresses both problems with an attentive neural ensemble that fuses AG-XCaps, H-SNN, and ResNet-GRU architectures. The system is trained on the Galaxy Zoo DESI dataset and paired with the LIME tool so that its results remain interpretable to astronomers. 🪐
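The article does not detail how the three branches are fused, but an "attentive" ensemble typically weights each model's class probabilities by learned relevance scores. The sketch below illustrates that idea with hypothetical outputs; the branch names, scores, and two-class setup are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the branch relevance scores
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attentive_ensemble(probs, scores):
    """Fuse per-branch class probabilities with attention weights.
    probs: (n_models, n_classes); scores: (n_models,) relevance logits."""
    weights = softmax(scores)   # attention over the three branches
    return weights @ probs      # weighted average of class probabilities

# Hypothetical branch outputs for one galaxy image
# (class order assumed: [non-interacting, interacting])
p_agxcaps = np.array([0.20, 0.80])
p_hsnn    = np.array([0.35, 0.65])
p_resgru  = np.array([0.10, 0.90])

fused = attentive_ensemble(np.stack([p_agxcaps, p_hsnn, p_resgru]),
                           scores=np.array([1.0, 0.5, 1.5]))
print(fused)  # still a valid probability vector; attention favors the confident branches
```

In this toy fusion the ResNet-GRU branch receives the largest attention weight, so the combined "interacting" probability leans toward its prediction.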

A Framework that Surpasses Classical Methods

The ensemble model reports strong metrics: a precision of 0.95, a recall of 1.00, an F1 score of 0.97, and an accuracy of 96%. Its performance clearly surpasses a Random Forest baseline, cutting false positives from 70 to just 23 cases. The design is also lightweight, at 0.45 MB, which allows it to scale to the enormous data volumes expected from future missions such as Euclid and the LSST.
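The reported metrics are internally consistent: the F1 score is the harmonic mean of precision and recall, and plugging in the stated values recovers the stated 0.97.

```python
# Check the reported ensemble metrics: with precision 0.95 and recall 1.00,
# the harmonic mean (F1) should land at the reported 0.97.
precision, recall = 0.95, 1.00
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.97
```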

Key Advantages of the Neural Ensemble:
  • High precision and recall for reliably identifying galactic interactions.
  • Lightweight architecture that facilitates processing large astronomical image catalogs.
  • Significantly reduces classification errors compared to traditional techniques.
The combination of high performance, reduced size, and ability to explain decisions positions this framework as a practical solution for current and future observatories.

Explainability as a Fundamental Pillar

Integrating LIME (Local Interpretable Model-agnostic Explanations) is a crucial component. This tool generates heatmaps that indicate which pixels or regions of a galaxy image most influenced the model's decision. This allows researchers to understand and validate predictions, fostering trust in artificial intelligence tools within the astronomical community.

Features of Integrated Explainability:
  • Produces intuitive visualizations that highlight key morphological features.
  • Helps astronomers verify the physical basis behind each classification.
  • Converts black-box predictions into evidence that astronomers can evaluate.
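The core idea behind LIME's heatmaps is simple: perturb regions of the input image and watch how the model's score changes. The sketch below is a deliberately simplified occlusion-style version of that idea in plain numpy (LIME itself fits a local linear surrogate over superpixel perturbations); the grid size and the stand-in classifier are illustrative assumptions.

```python
import numpy as np

def region_importance(image, predict, grid=4):
    """Toy LIME-style attribution: mask each cell of a coarse grid and
    measure how much the model's 'interacting' score drops."""
    h, w = image.shape
    heat = np.zeros((grid, grid))
    base = predict(image)
    for i in range(grid):
        for j in range(grid):
            masked = image.copy()
            masked[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid] = 0.0
            heat[i, j] = base - predict(masked)  # large drop = important region
    return heat

# Stand-in classifier: mean brightness as a proxy "interaction" score
predict = lambda img: img.mean()
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0          # bright central blob, e.g. overlapping galaxy cores
heat = region_importance(img, predict)
```

Here the heatmap peaks over the central blob and is zero in the empty corners, mirroring how a LIME overlay highlights the morphological features driving a classification.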
