Does zeroing out a hard drive guarantee total file elimination?



@Vrai -- thanks. Will check it out.
 
I have a faint memory that a lot of the received wisdom about securely erasing drives is a lingering myth from the days when the tracks on spinning disk drives were relatively wide compared to the read/write head, so you could actually get a sort of 'shadow' of old data at the edges of each track. I believe that with newer drives, which are far more data dense, this is no longer the case, so a single zero or random overwrite nukes all the data for all practical purposes.

I use cat /dev/zero > drive or cat /dev/urandom > drive to clear drives, and when it's actual data that could in theory hurt me or clients (financial records, logins, etc.), I use a hammer and a nail, which completely shatters the platters inside. It's surprisingly easy to drive a big nail through a drive, so that's the way to go. If the data is sensitive enough that you want multiple overwrites, which takes FOREVER on modern large drives, it's worth considering physically destroying the drive instead.
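For reference, a sketch of the kind of single-pass wipe I mean, assuming the target disk is /dev/sdX (a placeholder; triple-check the device name before running anything like this, because it destroys everything on the disk):

# one pass of zeros; dd with a big block size is much faster than cat
sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress

# or one pass of pseudorandom data; /dev/urandom doesn't block the way
# /dev/random could on older kernels
sudo dd if=/dev/urandom of=/dev/sdX bs=1M status=progress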

If you've ever watched Louis Rossmann recover data from flash drives on YouTube, you'll be well aware that the only way to be really sure you got rid of the data is to burn the device, or cut it into pieces smaller than the flash storage chips. If you cut through the actual chip, the data is gone for sure.

SSDs are slightly odd in how they write and rewrite data, but assuming deleted data has already been trimmed, a full zero write should take care of them too: there are no magnetized track edges to hold residual data, so if you overwrite every block the drive exposes, it's overwritten, period. (TRIM, for what it's worth, involves both levels: the file system tells the drive which blocks are no longer in use, and the drive's controller erases them internally when it gets around to it.) The one caveat is wear leveling: the controller remaps writes, so a zero pass through the normal interface may never touch spare or retired blocks.
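If you want to make sure trimmed blocks have actually been reported to the drive, fstrim (part of util-linux) does it at the file system level; the mount point below is just an example:

# tell the SSD which blocks the mounted file system no longer uses
sudo fstrim -v /

And for a whole-drive wipe that goes through the controller's firmware rather than the normal write path, SATA drives support ATA Secure Erase via hdparm. A sketch, again assuming /dev/sdX, using a throwaway password:

sudo hdparm --user-master u --security-set-pass p /dev/sdX
sudo hdparm --user-master u --security-erase p /dev/sdX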

I believe it's getting quite close to the realm of magical thinking to believe that a storage bit, once flipped to a zero or a random value, can retain some shadow image of the bit previously stored there. Or bytes; NAND flash is actually read and written in pages of a few kilobytes and erased in even larger blocks, not bit by bit.

I feel really sorry for this generation of SSD and flash drive users who don't grasp that all electronics die eventually; without backups, they have no data, at least not without spending a small fortune on data recovery.
 
Or, as we discussed before, melting does the job. I'd be willing to bet that zeroing and scrambling a few times each would guarantee loss of integrity to the point of being unreadable.
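Something like this would do it, as a sketch (again, /dev/sdX is a placeholder for the disk to wipe):

# three rounds each of zeroing and scrambling
for i in 1 2 3; do
  sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
  sudo dd if=/dev/urandom of=/dev/sdX bs=1M status=progress
done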
 
The Gutmann method is an algorithm for securely erasing the contents of computer hard disk drives, designed by Peter Gutmann and Colin Plumb in 1996. It overwrites the sectors to be erased in 35 passes: the first 4 and the last 4 write random data, while passes 5 through 31 use a complex set of fixed overwrite patterns. This is the most thorough erasing method, but it also takes a long time because of the sheer number of passes.

Data overwritten with the Gutmann method is non-recoverable; there is no known way to get it back with software, since the storage device only returns its current contents via its normal interface.

The shred command in Linux does this kind of multi-pass overwrite (3 random passes by default; you can ask for more with -n). See https://www.gnu.org/software/coreutils/manual/html_node/shred-invocation.html#shred-invocation (shred is part of coreutils).
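A sketch of a shred run against a whole device, assuming /dev/sdX is the disk to wipe:

# -v shows progress; -n 35 requests thirty-five overwrite passes
# (Gutmann-level paranoia; the default is 3); -z adds a final pass of
# zeros so the wipe itself is less conspicuous
sudo shred -v -n 35 -z /dev/sdX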
 