Let's go over this carefully, as the concepts are slippery.
Lack of continuous experience of an event does not mean you died; it is merely memory loss.
Assuming the very reasonable theory that consciousness is classical and emergent from the network structure and interactions of neurons and glia, it should be possible to encode this consciousness on a Turing machine. If we go a step further and replace certain collections of simulated cells with black boxes that behave the same way given an isomorphic set of inputs, then we can have consciousness even more cheaply.
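To make that substitution concrete, here is a minimal, purely illustrative Python sketch (the names and the toy threshold rule are hypothetical, not a model of real neurons): a simulated cell cluster is swapped for a black box that merely reproduces its input-to-output mapping, and the rest of the system cannot tell the difference.

    # Toy illustration of replacing a simulated cell cluster with a black box
    # that reproduces the same input -> output behaviour. Everything here is
    # a hypothetical stand-in.

    def simulate_cluster(inputs):
        """'Expensive' cell-by-cell simulation: each cell thresholds its input."""
        return tuple(1 if x > 0.5 else 0 for x in inputs)

    _recorded = {}  # black-box memory: input pattern -> observed output

    def black_box_cluster(inputs):
        """Cheap replacement: behaves identically for any recorded input."""
        key = tuple(inputs)
        if key not in _recorded:
            _recorded[key] = simulate_cluster(key)  # record the behaviour once
        return _recorded[key]

    pattern = (0.2, 0.7, 0.9)
    assert simulate_cluster(pattern) == black_box_cluster(pattern)
    print("identical outputs for identical inputs:", black_box_cluster(pattern))

The point of the sketch is only that functional equivalence over the same inputs is what matters on this view, not how the equivalence is produced.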
Your idea is not necessarily more grounded. The assumptions you make are that the substrate does not matter as long as it is not digital, and that the replacement material will not itself have effects that result in vastly divergent behaviour in the long run. Yet the perturbations a neuron suffers will differ from those of nanodiamonds; will this be significant? You also take for granted that the new replacement brain will not allow a very large viewpoint shift due to the new complexities and speeds of thought possible, that some invariance of self remains from morphism to morphism, and that there is more resemblance between you and the final being than between you and a lemur-like ancestor.
It is likely that the beings of the future will have far more sophisticated definitions of self and identity.
I personally think that physical bodies will give way to digital minds and it's only a matter of time. Whether linear time or log time I can't say. But physical bodies are resource hogs. Progress is energy-intensive, and so is ever more complex thinking. Imagine a being whose memory is so dense that its thoughts have a gravitational pull of their own and its mind risks collapsing into a black hole... Eventually there's going to be a lot of pressure to compress thinking beings and squeeze as much thinking capacity from matter as efficiently as possible.
You not only misstate my assumptions, but seem to miss my point entirely.
Here is my point: the original human will not experience continuity; therefore its instinct for self-preservation will not be satisfied. The best it can hope for is to take comfort in knowing that a copy will survive. Personally, this does not comfort me.
I don't doubt the possibility of a copy and supporting simulation with fidelity high enough for all intents and purposes. I also think there may be sound reasons to pursue it. But I don't think it will benefit those who are copied, beyond any positive thoughts and feelings it may give them before they die.
Being selfish and programmed for self-preservation, I desire physical immortality instead. I have no interest in donating a copy of my memories to a simulation project.
I hope this clears it up; my interest is quickly waning, and I have work to do.
The claim to the identity is not the issue; the issue is that the person who was copied will die and experience that death.
The existence of a copy does not resuscitate the original person or otherwise let them keep perceiving and thinking.
To express it in a bad analogy: you can have a bit-by-bit backup copy of a hard drive, but when a power surge burns the CPU and the disk, you have to throw both away. You can buy a new CPU and restore the backup, but the hardware is different, there is a shutdown moment, and when you power back up, the continuity is lost; it's a different entity that gets booted up.
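To push the analogy into code, here is a tiny hypothetical Python sketch (not anyone's real backup tool): the restored copy holds exactly the same data as the original, yet it is a distinct object, and nothing ran continuously across the gap.

    import copy

    # The "hard drive" contents of the original entity.
    original = {"memories": ["first day of school", "a red bicycle"]}

    # A bit-by-bit backup, later restored onto new hardware.
    backup = copy.deepcopy(original)
    restored = copy.deepcopy(backup)

    print(restored == original)   # True: identical contents
    print(restored is original)   # False: a distinct object; nothing ran
                                  # continuously across the shutdown gap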
To preserve the consciousness of a person across the "hardware change", I see no option other than the existence of something like a central repository of consciousness, outside both the body and the computer hosting the simulation, that gets automatically attached to a particular set of memories/perceptions/experiences (and whatever else defines a consciousness), so that when you die it stores your consciousness, and when the simulation is booted up, the continuity is triggered. I find that far-fetched.
In abstract, philosophical terms, it might not be. Even in policy terms it's probably not (other than that it might be cheaper to store people in SANs). In practical, day-to-day terms, of course it is! My own instance of self doesn't want to cease to exist.