Nectome and Digital Immortality: Promise, Science, and Skepticism

Published on March 30, 2026 | Translated from Spanish

The company Nectome proposes a radical path to immortality: preserving the brain after a medically assisted death so that, in a distant future, its connectome can be mapped and consciousness resurrected in a digital or biological medium. This idea, at the intersection of neuroscience and AI, generates both fascination and profound skepticism. Beyond the technology, it raises a crucial question: how do we socially manage the promises of speculative futures that today collide with insurmountable technical, legal, and ethical realities?

[Illustration: Human brain preserved in transparent resin, with fine threads of digital light traversing its neuronal structure.]

The three pillars of the problem: science, law, and philosophy 🤔

Scientifically, the obstacle is monumental. Having a static map of neuronal connections (the connectome) is not the same as understanding the dynamics of consciousness: it is like holding the blueprint of a powered-off computer without knowing its software. Legally, the method clashes head-on with the near-universal prohibition of euthanasia, since the patient must be alive during preservation to avoid brain damage. Philosophically, the question of identity arises: would a digital simulation recreated from that data, even if perfect, be a mere copy or a genuine continuation of the original self? These fundamental uncertainties turn the proposal into an act of faith in distant technological progress.

The social impact of speculative technological promises ⚖️

Cases like Nectome serve as a case study on the social impact of AI and digitalization. They create powerful narratives, such as digital immortality, that shape public expectations and divert attention from immediate ethical problems. Communities that debate these topics, like Foro3D, face the challenge of separating rigorous science from commercial speculation. This phenomenon can trigger credibility crises when inflated promises collide with reality, reminding us of the need for critical thinking in the face of futures sold as inevitable.

What metrics would you use to measure the community's sentiment toward an AI?
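One concrete starting point for that question is to compute simple aggregate metrics over forum posts. The sketch below is a toy illustration only: the word lists and example posts are invented, and a serious analysis would use a trained sentiment model rather than a hand-made lexicon. It shows two candidate metrics, mean polarity and the share of positive posts.

```python
# Toy sketch of community-sentiment metrics over a list of forum posts.
# POSITIVE/NEGATIVE lexicons and the example posts are made-up assumptions,
# not real data or a real sentiment model.

POSITIVE = {"fascinating", "promising", "rigorous", "hopeful"}
NEGATIVE = {"scam", "impossible", "hype", "skeptical"}

def sentiment_metrics(posts):
    """Return (mean polarity, share of positive posts) for a list of texts.

    Polarity per post = (positive hits - negative hits) / total hits,
    or 0.0 when no lexicon word appears in the post.
    """
    polarities = []
    for text in posts:
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total = pos + neg
        polarities.append((pos - neg) / total if total else 0.0)
    if not polarities:
        return 0.0, 0.0
    mean = sum(polarities) / len(polarities)
    positive_share = sum(p > 0 for p in polarities) / len(polarities)
    return mean, positive_share

posts = [
    "a fascinating and promising idea",
    "pure hype and probably impossible",
    "i remain skeptical but the science is rigorous",
]
mean, share = sentiment_metrics(posts)
```

Other plausible metrics, beyond this sketch, include polarity trends over time and the ratio of skeptical to enthusiastic replies per thread.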