- Errol Garner
- Member ●
- 2004-06-12 04:17
Seeing is believing.
I do agree, though, that it's bad that you need third-party software to repair "major errors" on disks. It would be good if Apple fixed that.
Yes, it is certainly long-awaited.
I also found the following about defragmentation, which clarifies things somewhat and shows that Apple's "automagic" system isn't always the best.
From MacIntouch
Re: “Automatic Optimization” in Panther
David Badovinac
On Dec 3, Tommy Igoe asked: "This might be an excellent time to ask about this mysterious feature I keep running into on message boards about Panther's 'Automatic Optimization' capability. Is this true, or a myth?"
From what I have been reading, the Darwin Gurus say that there are actually two separate file optimizations going on in Panther.
The first one is automatic file defragmentation. When a file is opened, if it is highly fragmented (i.e., 8+ fragments) and the file is under 20MB in size, it will be automatically defragmented. This is accomplished by the file system simply moving the file to a new, contiguous location. This process only happens on Journaled HFS+ volumes.
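The eligibility test described above can be sketched as follows. This is only an illustration of the stated criteria; the function and constant names are my own, not Apple's actual implementation:

```python
# Sketch of Panther's on-open defragmentation criteria as described above.
# Names and structure are hypothetical illustrations, not Apple's code.

MAX_SIZE_BYTES = 20 * 1024 * 1024  # file must be under 20 MB
MIN_FRAGMENTS = 8                  # and split into 8 or more fragments

def should_autodefrag(size_bytes, fragment_count, journaled):
    """Return True if a just-opened file would be relocated contiguously."""
    return (journaled
            and size_bytes < MAX_SIZE_BYTES
            and fragment_count >= MIN_FRAGMENTS)

# A 5 MB file in 12 pieces on a journaled volume qualifies:
print(should_autodefrag(5 * 1024 * 1024, 12, True))   # True
# The same file on a non-journaled volume does not:
print(should_autodefrag(5 * 1024 * 1024, 12, False))  # False
```

Note that all three conditions must hold at once: a large or lightly fragmented file is left alone, as is anything on a non-journaled volume.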
The second optimization is called "Adaptive Hot File Clustering". In general, it works like this: over a period of 60 hours, the file system keeps track of files that are read frequently (for a file to be considered a hot file, it must be less than 10MB and never written to). At the end of this period, the "hottest" files (i.e., the files that have been read the most times) are moved to the "hotband" of the disk (the part of the disk that is particularly fast given the physical characteristics of the drive).
The size of the "hotband" depends on the size of the disk (i.e., 5MB of hotband space for each GB of disk). "Cold" files that were in the hotband are moved out to make room for the hot files. As a side effect of being moved into the hotband, the hot files are defragmented.
Currently, Adaptive Hot File Clustering only works on the boot volume, and only for Journaled HFS+ volumes that are more than 10GB.
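Putting the numbers above together, the hotband sizing and hot-file eligibility rules can be sketched like this (again, all names are my own illustration of the description, not Apple's code):

```python
# Sketch of Adaptive Hot File Clustering parameters as described above.
# All identifiers are illustrative assumptions, not Apple's implementation.

HOTBAND_MB_PER_GB = 5   # 5 MB of hotband per GB of disk
MAX_HOT_FILE_MB = 10    # hot files must be under 10 MB and never written
MIN_VOLUME_GB = 10      # only journaled boot volumes over 10 GB participate

def hotband_size_mb(disk_gb):
    """Hotband space scales with disk size: 5 MB per GB."""
    return HOTBAND_MB_PER_GB * disk_gb

def is_hot_candidate(size_mb, ever_written, on_boot_volume, journaled, disk_gb):
    """A file can be tracked as 'hot' only if every stated condition holds."""
    return (size_mb < MAX_HOT_FILE_MB
            and not ever_written
            and on_boot_volume
            and journaled
            and disk_gb > MIN_VOLUME_GB)

# An 80 GB boot disk would get a 400 MB hotband:
print(hotband_size_mb(80))  # 400
# A small, read-only file on that volume is a candidate:
print(is_hot_candidate(2, False, True, True, 80))  # True
```

The "never written to" condition makes sense given the mechanism: a file that is rewritten would have to be moved or re-laid-out anyway, defeating the point of pinning it in the fast band.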
Tracy Valleau
In reply to the question about Panther defragmenting files: Yes, mostly.
First, journaling needs to be enabled for this to work (unlike third-party defragmenters, which require that journaling be turned off during the process).
Next, it only automatically defrags files smaller than 20 megabytes that also have at least eight "extents" (a directory tracking mechanism) in the directory — indicating, generally speaking, that the file is quite fragmented and will require an "extents overflow", which causes slower loading times.
Consider it a minor tune-up, not a day at the garage. If you're constantly creating large files (such as video and audio), a true optimizer will free up more contiguous space than the automatic defragmenting in Panther... which will leave other files (which don't meet the criteria above) fragmented.
MacGuru
"In addition to ensuring that HFS+ volumes have sufficient free contiguous disk space for the disk directory to grow, disk optimizers are useful because they simplify the disk directory, causing all of the nodes in the Extents B-Tree to be free rather than used. A simplified disk directory is easier to repair or rebuild. One symptom of an excessively complex disk directory is an error message from Disk First Aid that the "hash table is full." The hash table is created in RAM by Disk First Aid as it attempts to rebuild the disk directory. It is not a file on the disk itself...
"Mac OS X Panther adds some automatic file optimization and file relocation features, but they are quite limited."