Lost HD Space After an Aborted Free Space Erase

Posted by admin

Wiping free disk space: when you delete a file, Windows removes the reference to that file but doesn't delete the actual data that made up the file on your hard drive.

First, I would like to thank you for a great yet simple-to-use application. Now to the issue. I tried to run 'Wipe Free Space' on one of my hard drive partitions (271 GB) in Linux (Mint 16 Cinnamon). It progressed fine until a certain point (about 80% of the way), then stopped. I tried again and the same thing happened. After these attempts I found two folders in the partition root with weird names and weird behaviour.

I can enter the folders, but they won't list any files, although the cursor shows that they keep scanning for files. I tried deleting the folders with the following methods: via the Nemo file manager, using the 'rm -Rf' command, and even trying to shred them with BleachBit. The first two methods did nothing.

BleachBit, on the other hand, would run forever, listing strange filenames and folders, and even when it showed 'Done' in the progress bar it would continue to list files and never get around to actually deleting the folders. Any suggestions? What happens when you try?

If there is an error message, please copy it verbatim. Which file system are you using (ext3, ext4, btrfs)? If you are not sure, run df -T /folder, where /folder is the path of the folder with the problem. I just did a test with ext4 and Linux 3.11 (Ubuntu 13.10). I created a 1 GB partition and ran BleachBit until it made about 61K empty files, which exhausted the inodes. Then I was able to count them like this: ls -lah /tmp/ext4.mount/rkmOODvbML/ | wc -l, and delete them like this: sudo rm -rf /tmp/ext4.mount/rkmOODvbML.
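If you want to check whether the same inode exhaustion is happening on your own partition, here is a minimal sketch along the lines of that test; the mount point and folder name are just the ones from the test above, so substitute your own path:

# Check whether the partition has run out of inodes (IUse% near 100%)
df -i /tmp/ext4.mount

# Count the leftover entries in the wipe folder
ls -lah /tmp/ext4.mount/rkmOODvbML/ | wc -l

# Remove the whole folder; with tens of thousands of files this can take a while
sudo rm -rf /tmp/ext4.mount/rkmOODvbML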

I was able to delete one of the directories using the rm command. The other one is still there. As for your questions: the file system is ext4.

Running the ls command as instructed gives me the following result: ls: cannot access /media/docs/CBDgtQ77L: No such file or directory 0. Running sudo rm -rf returns immediately to the prompt without any visible result.

Even running sudo rm -rfv goes back to the prompt immediately without any output. Any other suggestions on how to remove this stubborn directory?
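A minimal diagnostic sketch, assuming the stubborn entry still shows up in its parent directory, is to list the parent with inode numbers and stat the entry directly:

# List the parent with inode numbers to see how the stubborn entry is recorded
ls -lai /media/docs

# Ask for the entry's metadata directly
stat /media/docs/CBDgtQ77L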

To answer your questions, here is the sequence of commands I ran in my terminal. Any other suggestions?

ronen@ronen-vostro $ ls /media/docs
CBDgtQ77L  Documents  Downloads  lost+found  Music  Pictures  Videos
ronen@ronen-vostro $ ls -lah /media/docs/CBDgtQ77L | wc -l
ls: cannot access /media/docs/CBDgtQ77L: No such file or directory
0
ronen@ronen-vostro $ sudo rm -rfv /media/docs/CBDgtQ77L
[sudo] password for ronen:
ronen@ronen-vostro $ ls /media/docs
CBDgtQ77L  Documents  Downloads  lost+found  Music  Pictures  Videos
ronen@ronen-vostro $

Try logging in as root. Go to Administration > Login Window > Options, then select 'Allow root login'. Log out of your current session.

Log in by typing 'root' (without the quotation marks), hit enter, and key in your password. When you log in you will find the files that BleachBit made right in the home folder. You will be able to delete them instantly. I tried this on Linux Mint 17.1 Cinnamon and it worked. On top of each window are the words 'Elevated privileges' highlighted in red.

This means you have full administrative access (root access). To be on the safe side, if you are not an advanced Linux user it is best to lock the root account once again.

Go to Menu > Administration > Login Window > Options, then deselect the 'Allow root login' option. Then log out of your current session or reboot. Log in again with your normal account.
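A rough terminal-side sketch of the same idea, if you would rather not touch the Login Window dialog; the passwd commands below only unlock and re-lock the root account itself:

# Give root a password so direct root logins are possible
sudo passwd root

# ... do the cleanup as root ...

# Lock the root account again afterwards
sudo passwd -l root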


If you do not understand why Linux Mint does not allow logging in as 'root' by default, check the Linux Mint/Ubuntu forums for answers. I hope this helps. Please check that it is BleachBit version 1.8 by clicking Help - About. Since BleachBit version 1.2, the folder is located by default in ~/.cache instead of ~/.


You can change this preference either by deleting ~/.config/bleachbit/bleachbit.ini (such as by clicking File - Shred Settings and Quit) or by changing the folder under the Preferences menu. In BleachBit 1.0 the folder could have many thousands of empty files, but since BleachBit 1.2 it will have only a few, large files.
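A quick sketch of how you might confirm the version and look for the leftover wipe folder from a terminal, assuming the bleachbit command-line front end is installed; the wipe folder gets a random name, so you are simply looking for anything strange:

# Print the installed BleachBit version
bleachbit --version

# Look for a leftover wipe folder with a random-looking name
ls -la ~/.cache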

In both cases the folders and files will have strange names. Linux is slow at deleting thousands of files, so much so that it can seem like it is not doing anything, but you really should not see this since BleachBit version 1.2. As you indicated, it may be a good idea to avoid the 'wipe free disk space' option unless you really need it.

As far as I understand, when I delete a file (without using the Recycle Bin), its record is removed from the file system's table of contents (FAT/MFT/etc.), but the values of the disk sectors which were occupied by the file remain intact until those sectors are reused to write something else. When I use some sort of erased-file recovery tool, it reads those sectors directly and tries to rebuild the original file. In that case, what I can't understand is why recovery tools are still able to find deleted files (with a reduced chance of rebuilding them, though) after I defragment the drive and overwrite all the free space with zeros. Can you explain this? I thought zero-overwritten deleted files could only be found by means of special forensic-lab magnetic-scan hardware, and that complex wiping algorithms (overwriting free space multiple times with random and non-random patterns) only make sense to prevent such a physical scan from succeeding, but in practice it seems that a plain zero-fill is not enough to wipe all traces of deleted files.

How can this be? UPDATE, addressing the questions that came up: I've tried the following wipe tools: Sysinternals SDelete, CCleaner, and a simple command-line utility whose name I can't remember, which creates a growing zero-filled file until the whole free space is taken and then deletes it. I've tried the following recovery tools: Recuva, GetDataBack, R-Studio, and EasyRecovery. I can't exactly remember which tools gave which specific result (as far as I can remember, trial versions of some of them only show file names and can't actually recover).

Probably in most (but not all) cases they only saw the names and could not recover the data, but this is still a security threat to be addressed, since file names can be pretty informative on their own (for example, I've seen a guy who stored passwords in text files named after the passworded resource plus the login name, while login names should be secured too). It is not enough to delete the data and format the hard drive (which deletes the address tables).

This only removes the link to the data. For the data to be erased, new data must be written on top of it.

Just writing on top of the data once is not enough. This is why the more secure methods of disk wiping write different types of data to the disk multiple times. The more times new data is written onto the disk, the more securely the old data is erased.
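The crude single-pass approach mentioned above (growing a zero-filled file until the volume is full, then deleting it) can be sketched with standard tools; the mount point and file name here are only placeholders:

# Fill all free space on the mounted volume with zeros; dd stops when the disk is full
dd if=/dev/zero of=/media/docs/zero.fill bs=1M

# Make sure the data actually reaches the disk, then free the space again
sync
rm /media/docs/zero.fill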

I notice you asked about the filenames staying behind as well as the data; that's normal. No disk wiper will overwrite directory entries, because the only way to do so is to create and delete files in the containing directory until the old entry is overwritten. Depending on how fancy the filesystem is (ext4, ntfs, reiserfs, hfs+, and others with non-linear directory structures), this may take multiple attempts.
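A minimal sketch of that idea, assuming the directory in question is writable; as noted above, there is no guarantee a given filesystem will actually reuse the old entries:

# Create and then delete a burst of dummy files so old directory entries may get reused
cd /media/docs
for i in $(seq 1 10000); do touch "pad.$i"; done
rm pad.*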

Another possible reason for file data remaining recoverable on some filesystems is that it could be sitting in the journal. Many free-space wipe utilities write directly to the device, avoiding the filesystem; and a sufficiently smart journal might detect that all zeroes are being written into a file until it is full (more precisely, that the same block of data is being written multiple times) and only save it once, leaving other things in the journal intact. And some smart filesystems may stuff sufficiently small files into the filesystem's metadata (the inode in Unix filesystems), making that data impossible for any ordinary disk wipe to touch.
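If you want to check that last point on an ext4 volume, one possible approach is to ask debugfs how a small file is stored; the device and path below are only placeholders, and whether inline data is in use at all depends on the features the filesystem was created with, so treat this as a sketch rather than a recipe:

# Show the inode details for a small file; on ext4 with inline data enabled,
# tiny files can live entirely inside the inode rather than in separate data blocks
sudo debugfs -R 'stat /Documents/tiny-note.txt' /dev/sdb1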