Diskomizer

Overview

Diskomizer is a program for testing and verifying disk subsystems. It uses multiple processes to issue asynchronous writes and reads to the devices that are specified, and then verifies that the data read back is the data that was written to that block. Every block of data has a unique header each time it is written, and the body of the data changes every time the block is written. It can also be used for read-only testing of devices, providing a non-destructive method of testing.

Diskomizer will find broken devices and paths to devices, software bugs and latent faults in hardware. It does not break these devices; it simply finds faults that are already there. It knows nothing about the underlying storage devices, and hence can be used just as well to generate load on NFS file systems as on any other device. Diskomizer can be run as an ordinary user, as long as that user has permission to open the files and/or devices that are being used.

Operation

The Diskomizer program uses asynchronous I/O to load a device with reads and, optionally, writes. The data it writes is checksummed, and it keeps a record of the data it wrote to each block so that it can confirm that the data it reads back is the same as the data it wrote. It can check either by doing a full binary compare or by just checking that the checksums agree. Additionally it can write SPARC, i86pc or amd64 instructions to the disk, which it will execute after they have been read back into the shared memory.

When not in read-only mode, the basic operation is that each process uses asynchronous writes, queuing as many writes as you request, to first fill the devices with known data. After it has written a percentage of each disk (10% by default) it will begin to queue up the number of reads you have requested. The reads are from random blocks of the disk that have already been written to; it then compares the data read with what was written and, if the compare fails, creates a file containing the two blocks and the diffs between them. It also has the option to run a number of write-read threads that write a block and immediately read the block back to verify it; see WRTHREADS. Once it has written to all the blocks on the device it will then start to write to blocks at random. It keeps a bit map for each device to lock access to the device blocks and a bit map to lock access to the shared read and write buffers. When it is run in multi-processor mode the bit maps are held in shared memory and are protected by locks also held in shared memory.
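
A minimal sketch of the verification step described above, assuming a simple additive checksum in place of Diskomizer's real checksumming routine; block_ok() and its arguments are illustrative names, not Diskomizer's own:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simple additive checksum, standing in for Diskomizer's real routine. */
static uint32_t checksum(const unsigned char *buf, size_t len)
{
    uint32_t sum = 0;
    while (len--)
        sum += *buf++;
    return sum;
}

/* Verify a block read back from the device, either with a full binary
 * compare against the copy of what was written, or by checksum alone. */
static int block_ok(const unsigned char *got, const unsigned char *want,
                    uint32_t want_sum, size_t len, int full_compare)
{
    if (full_compare)
        return memcmp(got, want, len) == 0;
    return checksum(got, len) == want_sum;
}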

Load

Because it is multi-process, Diskomizer can be configured to put an enormous load on disk subsystems, even dedicated RAID devices.

For example, here is the iostat output from a pair of A5200s with Diskomizer running. The throttle for these disks is 15, so the maximum possible number of active commands per disk is 15; as you can see, all of these drives have averaged 15 active commands over the last 5 seconds. You can always tell whether you have reached the maximum that a device can handle by looking at the "wait" column. If the wait column is non-zero then the number of active commands must be at the maximum, forcing the device driver to queue I/O internally on the wait queue.

device    r/s  w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b 
ssd0     48.7 45.5    0.4    0.4 135.8 15.0 1600.9 100 100 
ssd1     48.1 44.9    0.4    0.4 58.5 15.0  790.6 100 100 
ssd2     45.3 46.5    0.4    0.4 117.0 15.0 1437.7 100 100 
ssd3     42.5 38.9    0.3    0.3 98.0 15.0 1388.2 100 100 
ssd4     38.5 34.3    0.3    0.3 102.9 15.0 1618.5 100 100 
ssd5     37.9 35.5    0.3    0.3 72.9 15.0 1197.1 100 100 
ssd6     44.9 47.5    0.4    0.4 125.4 15.0 1518.8 100 100 
ssd7     50.1 44.3    0.4    0.3 103.3 15.0 1253.3 100 100 
ssd8     56.9 55.5    0.4    0.4 75.8 15.0  808.0  99 100 
ssd9     37.9 36.3    0.3    0.3 171.5 15.0 2512.3 100 100 
ssd10    45.9 43.5    0.4    0.3 107.3 15.0 1367.3 100 100 
ssd11    42.3 41.3    0.3    0.3 79.7 15.0 1132.7 100 100 
ssd12    50.3 53.5    0.4    0.4 145.9 15.0 1550.5 100 100 
ssd13    36.3 32.5    0.3    0.3 126.0 15.0 2047.8 100 100 
ssd14    50.5 46.7    0.4    0.4 115.2 15.0 1339.0 100 100 
ssd15    50.1 49.5    0.4    0.4 115.7 15.0 1312.2 100 100 
ssd16    45.5 44.1    0.4    0.3 103.4 15.0 1320.8 100 100 
ssd17    42.7 41.3    0.3    0.3 107.6 15.0 1459.3 100 100 
ssd18    40.7 37.9    0.3    0.3 121.4 15.0 1734.1 100 100 
ssd19    45.5 36.7    0.4    0.3 95.2 15.0 1339.4 100 100 
ssd20    46.7 48.9    0.4    0.4 162.7 15.0 1858.5 100 100 
ssd21    51.5 53.7    0.4    0.4 103.4 15.0 1125.4 100 100 
ssd22    46.9 42.1    0.4    0.3 105.1 15.0 1348.5 100 100 
ssd23    49.7 48.1    0.4    0.4 77.2 15.0  943.0 100 100 
ssd24    39.7 38.5    0.3    0.3 115.5 15.0 1668.3 100 100 
ssd25    50.7 44.3    0.4    0.3 156.6 15.0 1806.6 100 100 
ssd26    36.7 41.7    0.3    0.3 146.9 15.0 2063.3 100 100 
ssd27    43.1 39.7    0.3    0.3 142.0 15.0 1895.1 100 100 
ssd28    54.1 46.7    0.4    0.4 154.6 15.0 1682.3 100 100 
ssd29    46.7 43.3    0.4    0.3 93.1 15.0 1200.8 100 100 
ssd30    37.1 42.1    0.3    0.3 123.0 15.0 1741.1 100 100 
ssd31    42.7 42.7    0.3    0.3 128.0 15.0 1674.1 100 100 
ssd32    50.7 46.1    0.4    0.4 61.7 15.0  792.4 100 100 
ssd33    47.7 46.3    0.4    0.4 112.8 15.0 1359.0 100 100 
ssd34    49.9 45.9    0.4    0.4 142.9 15.0 1648.4 100 100 
ssd35    41.5 34.3    0.3    0.3 78.3 15.0 1230.7 100 100 
ssd36    48.5 43.1    0.4    0.3 119.2 15.0 1465.0 100 100 
ssd37    40.1 38.1    0.3    0.3 113.6 15.0 1643.1 100 100 
ssd38    40.7 35.5    0.3    0.3 111.3 15.0 1656.8 100 100 
ssd39    42.5 40.9    0.3    0.3 126.5 15.0 1696.3 100 100 
ssd40    53.3 53.1    0.4    0.4 107.2 15.0 1148.9 100 100 
ssd41    49.7 47.3    0.4    0.4 103.1 15.0 1217.1 100 100 
ssd42    47.1 48.3    0.4    0.4 127.7 15.0 1495.7 100 100 
ssd43    43.5 41.9    0.3    0.3 60.1 15.0  879.1 100 100 
ssd44    50.5 51.1    0.4    0.4 97.6 15.0 1108.7 100 100 
ssd45    50.1 45.7    0.4    0.4 111.7 15.0 1322.5 100 100 
ssd46    52.7 52.7    0.4    0.4 128.9 15.0 1365.2 100 100 
ssd47    57.9 51.1    0.5    0.4 115.0 15.0 1192.8 100 100 
ssd48    54.3 48.1    0.4    0.4 89.5 15.0 1020.7 100 100 
ssd49    50.3 49.3    0.4    0.4 106.5 15.0 1220.4 100 100 
ssd50    45.7 43.9    0.4    0.3 86.4 15.0 1131.1 100 100 
ssd51    47.1 45.7    0.4    0.4 85.5 15.0 1083.0 100 100 
ssd52    58.7 56.1    0.5    0.4 115.9 15.0 1140.5 100 100 
ssd53    57.1 53.3    0.4    0.4 87.9 15.0  932.2 100 100 
ssd54    43.5 38.7    0.3    0.3 102.5 15.0 1429.6 100 100 
ssd55    50.7 48.7    0.4    0.4 104.7 15.0 1204.0 100 100 
ssd56    51.7 45.3    0.4    0.4 76.1 15.0  938.9 100 100 
ssd57    54.9 51.3    0.4    0.4 80.8 15.0  902.5 100 100 
ssd58    46.7 46.5    0.4    0.4 86.5 15.0 1089.5 100 100 
ssd59    48.5 41.1    0.4    0.3 114.6 15.0 1446.4 100 100 
ssd60    49.3 46.5    0.4    0.4 106.1 15.0 1264.1 100 100 
ssd61    54.1 52.7    0.4    0.4 104.3 15.0 1117.0 100 100 
ssd62    46.7 43.7    0.4    0.3 106.0 15.0 1338.4 100 100 
ssd63    42.7 42.9    0.3    0.3 111.3 15.0 1474.5 100 100 
ssd64    42.3 39.7    0.3    0.3 133.9 15.0 1814.7 100 100 
ssd65    38.7 37.3    0.3    0.3 114.6 15.0 1704.4 100 100 
ssd66    46.7 44.5    0.4    0.3 116.0 15.0 1436.5 100 100 
ssd67    48.7 43.9    0.4    0.3 96.9 15.0 1208.4 100 100 
ssd68    55.5 52.1    0.4    0.4 105.2 15.0 1116.9  97 100 
ssd69    48.9 48.9    0.4    0.4 107.2 15.0 1250.0 100 100 
ssd70    46.3 42.1    0.4    0.3 107.6 15.0 1386.4 100 100 
ssd71    46.3 46.7    0.4    0.4 76.3 15.0  981.1 100 100 
ssd72    44.5 40.7    0.3    0.3 97.3 15.0 1317.6 100 100 
ssd73    51.5 51.3    0.4    0.4 115.1 15.0 1265.6 100 100 
ssd74    54.3 46.9    0.4    0.4 99.6 15.0 1132.9 100 100 
ssd75    50.3 45.5    0.4    0.4 130.8 15.0 1522.3 100 100 
ssd76    46.1 44.5    0.4    0.3 132.6 15.0 1628.5 100 100 
ssd77    46.9 39.1    0.4    0.3 91.3 15.0 1236.0 100 100 
ssd78    52.9 52.1    0.4    0.4 102.9 15.0 1123.5 100 100 
ssd79    52.7 51.5    0.4    0.4 114.1 15.0 1239.4 100 100 
ssd80    45.5 47.3    0.4    0.4 104.4 15.0 1286.3 100 100 
ssd81    35.5 41.5    0.3    0.3 148.4 15.0 2121.1 100 100 
ssd82    59.9 56.3    0.5    0.4 104.8 15.0 1031.1 100 100 
ssd83    49.7 46.3    0.4    0.4 126.0 15.0 1468.8 100 100 
ssd84    49.3 48.1    0.4    0.4 104.1 15.0 1222.7 100 100 
ssd85    52.3 46.7    0.4    0.4 104.0 15.0 1201.7 100 100 
ssd86    51.7 50.3    0.4    0.4 92.6 15.0 1054.7 100 100 
ssd87    50.5 50.7    0.4    0.4 105.9 15.0 1195.1 100 100 

In the above, the A5200s are connected to the system via two loops, and both loops are being used for I/O by Diskomizer.

I/O Models

The asynchronous I/O model that Diskomizer uses is loaded from a shared library at run time. Currently there are five different models available in the Diskomizer package; more are planned, but they will be shipped as separate packages to reduce the testing overhead of all the different models.

SUNOS
    The traditional SunOS asynchronous I/O model, using aiowrite(), aioread() and aiowait().

POSIX
    The POSIX asynchronous I/O model, using aio_write(), aio_read(), aio_error() and aio_return().

PREAD
    Uses POSIX threads to issue pread() and pwrite() system calls asynchronously to the main thread.

FS
    Uses POSIX threads to get asynchronous behaviour, but stores the data in multiple files in a directory, or as attributes of the directory (see fsattr(5)). This overcomes the single per-file writer lock that many file systems have, as there are now many files.

USCSI
    Uses POSIX threads to get asynchronous behaviour and then uses uscsi(7I) to issue the I/O.

The SUNOS I/O model is the most efficient for Diskomizer and is the default, followed by the POSIX model. This is a consequence of the Diskomizer implementation; it is not necessarily a reflection of the underlying programming models. The DAIO implementation of the POSIX model suffers on pre-Solaris 8 releases, where there is no single-tier threads library, as it relies on signals for notification of completion of I/O. This is very inefficient with the two-tier threads library and results in very poor performance compared with the SUNOS model.
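
For illustration, here is a minimal, self-contained example of the calls the POSIX model is built around (aio_write(), aio_error() and aio_return()); the file name, block size and polling loop are illustrative and not taken from Diskomizer:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[2048];
    struct aiocb cb;
    int fd = open("/tmp/aio-demo", O_RDWR | O_CREAT, 0600);

    if (fd == -1) {
        perror("open");
        return 1;
    }
    memset(buf, 0xAA, sizeof (buf));
    memset(&cb, 0, sizeof (cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof (buf);
    cb.aio_offset = 0;

    if (aio_write(&cb) != 0) {            /* queue the asynchronous write */
        perror("aio_write");
        return 1;
    }
    while (aio_error(&cb) == EINPROGRESS) /* wait for it to complete */
        usleep(1000);
    if (aio_return(&cb) != (ssize_t)sizeof (buf)) {
        perror("aio_write completion");
        return 1;
    }
    (void) close(fd);
    return 0;
}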

Choosing an I/O model

If you wish to exercise raw devices then the SUNOS model is the most efficient, with the POSIX model a close second. For file system testing the FS model has the greatest potential, as it has no limits on the number of threads that it will use and it reduces the impact of any per-file write locks that the file system may have; however, this can lead to very large numbers of threads running.

The I/O model is selected using the AIO_ROUTINES option.

Read only operation

If the option O_RDONLY is set, then Diskomizer will open all the devices and files read-only. In this mode no data is ever written to the devices, so the test is non-destructive. In this mode, rather confusingly, the write threads do not write any data; instead they read blocks and note their checksums so that subsequent reads can verify that the checksums are unchanged. If there are differences the error is reported, but no diff file is created, as the old data is not available to produce the diff.

Memory Allocator

Diskomizer is very memory intensive. In addition to the memory required for the buffers to do I/O to and from, it also has to store some data about each block on the devices, so that when a block is read back it knows what it wrote to that block and can check that the data is correct. For even a moderate number of drives, if you are testing the whole drive this turns out to be a lot of memory. The 32-bit Diskomizer keeps 28 bytes of data per Diskomizer disk block. So if you want to Diskomizer a 9G drive using 2K blocks you have 4.5M blocks, which in turn gives 126M of shared memory just to hold the data about the blocks. There are also the bit maps, which add another 576K, giving about 127M for the 9G drive. This does not include the space being used for the I/O buffers, which tends to be insignificant unless you are doing enormous I/O sizes.
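
As a rough illustration of the arithmetic above, this small stand-alone program reproduces the 9G drive / 2K block figures; the 28 bytes per block and one lock bit per block are the values quoted above, everything else is just arithmetic:

#include <stdio.h>

int main(void)
{
    long long dev_bytes  = 9LL * 1024 * 1024 * 1024;   /* a 9G drive             */
    long long block_size = 2048;                       /* 2K Diskomizer blocks   */
    long long blocks     = dev_bytes / block_size;     /* ~4.5M blocks           */
    long long block_data = blocks * 28;                /* 28 bytes per block     */
    long long bit_map    = blocks / 8;                 /* one lock bit per block */

    printf("blocks      %lld\n", blocks);
    printf("block data  %lldM\n", block_data >> 20);   /* ~126M */
    printf("bit map     %lldK\n", bit_map >> 10);      /* ~576K */
    return 0;
}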

Address Space Limitations

It should be clear that the 32-bit Diskomizer will only have enough address space to hold data for, at the very most, 32 9G disks doing 2K I/Os, and that assumes that there is nothing else in the address space, which there clearly is. Additionally there are various resource limitations that can be configured on the system that will restrict the number and/or size of individual memory segments further, so that even when using the 64-bit Diskomizer it is not always possible to have all the required memory mapped at the same time.

Diskomizer works around these issues by allowing certain memory segments to be detached and attached on demand. In a future release all shared memory segments may be detachable.

Descriptions

Here is a brief description of each of the memory allocators. All the shared memory used by Diskomizer is allocated at start up time but attached to at run time.

SHM

The SHM shared memory allocator uses System V shared memory obtained with shmget(2) and attached using shmat(2).

When Diskomizer needs to allocate a chunk of shared memory it searches the shared memory segments that are already allocated for a chunk of memory that is free, large enough and of the same type (there are two types: memory that can be detached and memory that cannot be detached). If it finds a large enough free chunk of memory then it uses as much memory from that chunk as it needs. If it cannot find a large enough chunk then it allocates a new block of shared memory using shmget with the maximum size that it can (configured by the option SHMINFO_SHMMAX) and uses as much of that block of memory as it needs.

At run time, when it needs to access a chunk of shared memory, it finds which block of shared memory that chunk is in and, if that block is not currently attached, it attempts to attach the whole block. If the attach fails then it finds the least recently used block of shared memory that is not in use, detaches that, and tries the attach again. This continues until either the memory is attached or all the free memory is detached. If, after detaching all the free memory, the new attach still does not succeed then Diskomizer will exit with an error.
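
A sketch of the attach-with-detach-retry behaviour described above, assuming a hypothetical detach_lru_block() helper that stands in for Diskomizer's own bookkeeping of least recently used, unused blocks:

#include <errno.h>
#include <stddef.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Stand-in for Diskomizer's own bookkeeping: shmdt(2) the least recently
 * used block that is not in use; return 0 when nothing is left to detach. */
extern int detach_lru_block(void);

/* Attach a System V segment, detaching idle blocks until the attach fits. */
void *attach_block(int shmid)
{
    for (;;) {
        void *addr = shmat(shmid, NULL, 0);
        if (addr != (void *)-1)
            return addr;                  /* attached */
        if (errno != ENOMEM && errno != EMFILE)
            return NULL;                  /* unexpected failure */
        if (!detach_lru_block())
            return NULL;                  /* nothing left to detach: give up */
    }
}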

ISM

The ISM shared memory allocator is identical to the SHM memory allocator except that when shmat(2) is called the SHM_SHARE_MMU flag is passed to get "Intimate" shared memory.
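
For reference, a minimal sketch of how an ISM segment is obtained on Solaris; the segment size and permissions are illustrative:

#include <stddef.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Allocate and attach one segment of "Intimate" shared memory. */
void *get_ism_segment(size_t size)
{
    int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
    void *addr;

    if (id == -1)
        return NULL;
    /* SHM_SHARE_MMU is the Solaris flag that requests ISM. */
    addr = shmat(id, NULL, SHM_SHARE_MMU);
    return (addr == (void *)-1) ? NULL : addr;
}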

MMAP

The MMAP shared memory allocator uses mmap(2) from /dev/zero for memory that cannot be detached, and from a file that it creates in the directory given by the EXPERT_MMAP_FILE_DIRECTORY option for memory that can be detached. When not using /dev/zero the file is immediately unlinked, so unless you know where to look you will never see it, but it will still use up space.

When Diskomizer needs to allocate a chunk of shared memory it searches the mapped files for a chunk of memory that is large enough and, if there is enough space, it uses that. If there is not enough space in the existing files it will ftruncate(3C) the last file that was created to be (100 * 1024 * sysconf(_SC_PAGESIZE)) bytes bigger, and it continues doing this until either there is enough space or the file reaches its maximum size.

At run time, when Diskomizer needs to attach a chunk of shared memory that is not currently mapped, it finds the file and offset for that memory and then uses mmap(2) to map the pages relating to that memory. Unlike the SHM and ISM memory allocators it only maps the memory that it needs, not the whole file. If the mmap fails then it unmaps the least recently used area of memory that is free and tries the mmap again; it repeats this until either all the memory mappings for free memory have been removed or the mmap of the new segment has succeeded.
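
A simplified sketch of the two kinds of mapping described above; in the real allocator the backing files live under EXPERT_MMAP_FILE_DIRECTORY and are grown incrementally, so the paths, sizes and error handling here are illustrative only:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Non-detachable memory: map shared anonymous pages from /dev/zero. */
void *map_fixed_memory(size_t size)
{
    void *addr;
    int fd = open("/dev/zero", O_RDWR);

    if (fd == -1)
        return NULL;
    addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    (void) close(fd);             /* the mapping keeps its own reference */
    return (addr == MAP_FAILED) ? NULL : addr;
}

/* Detachable memory: a backing file that is unlinked straight away and
 * grown with ftruncate() when more space is needed.  The offset is
 * assumed to be page aligned. */
void *map_file_chunk(const char *path, off_t offset, size_t size, off_t new_len)
{
    void *addr;
    int fd = open(path, O_RDWR | O_CREAT, 0600);

    if (fd == -1)
        return NULL;
    (void) unlink(path);          /* invisible, but still using space */
    if (ftruncate(fd, new_len) == -1) {
        (void) close(fd);
        return NULL;
    }
    addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, offset);
    (void) close(fd);
    return (addr == MAP_FAILED) ? NULL : addr;
}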

BEST_SHM

The BEST_SHM allocator is a derived allocator that uses the ISM and SHM allocators to allocate and attach to memory, trying the ISM allocator first and, if that fails with ENOMEM, trying SHM before attempting to detach any shared memory.

BEST

The BEST allocator is a derived allocator which uses the BEST_SHM and MMAP allocators to allocate and attach to shared memory. When it is unable to attach a chunk of memory it first tries detaching memory segments of the same type as the one that it is trying to attach, before detaching memory segments of the other type.

Which Shared Memory Allocator should I use?

If all you want to do is exercise the disks then either leave the memory allocator at the default or use the MMAP allocator. If you wish to simulate the behaviour of an RDBMS then you should use the ISM allocator, bearing in mind that you will have to configure the system's shared memory parameters in /etc/system and also pass the value of SHMINFO_SHMMAX to Diskomizer so that Diskomizer knows the maximum size of shared memory segment that it can create.

Buffer Headers

Every buffer that is written to the device or file has a unique buffer header that contains the information required for Diskomizer to track errors. The information stored in the header is as follows:

  1. The device identifier, which consists of the inode number of the device and the device id as given by fstat().

  2. The type of the buffer. This is a 32-bit mask that describes what the buffer contains.

  3. The buffer header's checksum.

  4. The buffer data's checksum.

  5. The length of the I/O.

  6. The offset in the device that the I/O was done to.

  7. The process ID of the master diskomizer process.

  8. The serial number and hardware provider of the system which is running diskomizer.

So that the same data is not written to the same part of the disk over and over again, there are actually two types of buffer header, type 'A' and type 'B'. Type 'A' headers have the 64-bit value 0xAAAAAAAAAAAAAAAA as the first 8 bytes before and after the header; type 'B' headers have the 64-bit value 0x5555555555555555. The definitions, offsets and sizes of the various elements are printed out when Diskomizer starts and also above each entry that is written to a diffs file.
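
The structure below is a hypothetical layout for the fields listed above, intended only to make the list concrete; the real definitions, offsets and sizes are the ones Diskomizer prints at start up:

#include <stdint.h>
#include <sys/types.h>

struct buf_header {                /* hypothetical layout only */
    uint64_t sentinel_start;       /* 0xAAAAAAAAAAAAAAAA ('A') or 0x5555555555555555 ('B') */
    ino_t    dev_inode;            /* device identifier: inode number ...     */
    dev_t    dev_id;               /* ... and device id, both from fstat()    */
    uint32_t buf_type;             /* 32-bit mask describing the buffer       */
    uint32_t header_cksum;         /* checksum of the header                  */
    uint32_t data_cksum;           /* checksum of the buffer data             */
    uint64_t io_length;            /* length of the I/O                       */
    uint64_t io_offset;            /* offset on the device of the I/O         */
    pid_t    master_pid;           /* process ID of the master diskomizer     */
    char     system_id[64];        /* serial number and hardware provider     */
    uint64_t sentinel_end;         /* same value as sentinel_start            */
};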

Data Patterns

Diskomizer's sequential data is a sequence of 251 elements starting from 0, 1, 2, 3 or 4. Each sequence is used in turn, so there are five sequences, 0-250, 1-251, 2-252, 3-253 and 4-254, and each of these sequences starts at a different offset within a 256-byte block. The first sequence starts at offset 0, the next at 251, the next at 502, and so on. So even though the data is sequential it repeats very rarely and from different byte offsets each time. The whole pattern only repeats every 305005 bytes, rather than every 256 bytes that you would get with the simpler pattern.
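
A sketch of how such a buffer could be filled, following the description above; the function name is illustrative and this is not Diskomizer's own code:

#include <stddef.h>

/* Fill a buffer with the five 251-byte sequences, each used in turn. */
void fill_sequential(unsigned char *buf, size_t len)
{
    size_t pos = 0;
    int seq = 0;                          /* which sequence: 0, 1, 2, 3 or 4 */

    while (pos < len) {
        int i;
        for (i = 0; i < 251 && pos < len; i++)
            buf[pos++] = (unsigned char)(seq + i);   /* seq .. seq + 250 */
        seq = (seq + 1) % 5;
    }
}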

The reverse sequential pattern is the same but in reverse; the sequences count down from 255, 254, 253, 252 or 251.

The random pattern is as random as lrand() can give.

There is also support for the ISI and CJT killer patterns, which are designed to cause Fibre Channel communications to have difficulties.

You can also supply a binary file containing a pattern, which will be loaded as many times as needed to fill the buffer, by using the USERPAT option and the EXPERT_USERPAT_FILE option to specify the file.

Read Buffer data patterns

Prior to reads being submitted, diskomizer initializes the buffer into which it is reading, to make any failure to copy data easier to detect. The pattern used is controlled by the options READ_BUFFER_INIT and READ_BUFFER_SUPPLIED_VALUE. The default is to repeat the 32-bit pattern 0xfeedbede over the whole read buffer.
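
A minimal sketch of that initialization, assuming the default 0xfeedbede pattern; the function name is illustrative:

#include <stddef.h>
#include <stdint.h>

/* Fill a read buffer with a repeating 32-bit pattern before the read is
 * submitted, so data that fails to arrive is easy to spot. */
void init_read_buffer(uint32_t *buf, size_t nbytes)
{
    const uint32_t pattern = 0xfeedbede;   /* the default value */
    size_t i;

    for (i = 0; i < nbytes / sizeof (pattern); i++)
        buf[i] = pattern;
}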

Path Checking

During start up, Diskomizer writes a unique identifier to the same known block on each device, after first zeroing that block. It then reads the block back via all the paths to each device and verifies that they match. This will find errors in the configuration where two paths to the same device are specified as separate devices. It will not, however, find situations where you have overlapping partitions.

I/O clustering

Once it begins to do "random" I/O, it can cluster the blocks being written, to simulate read-ahead and writes of sequential disk blocks. The size of these I/O clusters is controlled by the options EXPERT_READ_CLUSTER_LENGTH and EXPERT_WRITE_CLUSTER_LENGTH.

Idling devices

Diskomizer can idle devices for periods of time. This can allow devices that do "house keeping" when idle to start doing so. These delays are controlled by the options EXPERT_MAX_ACTIVE_TIME, EXPERT_MIN_ACTIVE_TIME, EXPERT_MAX_IDLE_TIME and EXPERT_MIN_IDLE_TIME. The same options can also be used to make Diskomizer only load drives during off-peak times.

The implementation of this feature is a state machine with four states:

  1. STOPPED - There are no I/O's queued to this device or file.

  2. STARTING - The device has some I/O's queued but is not yet up to the maximum number.

  3. RUNNING - The maximum number of I/O's are queued to this device, any I/O that returns causes a new I/O to be queued immediately.

  4. STOPPING - There are I/O's queued to the device, but when the I/O's return they will not cause new I/O's to be submitted, until the device has STOPPED and the idle time has passed.
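
A minimal sketch of the state machine as an enumeration, with the transitions above summarised as comments; the names are illustrative, not Diskomizer's own:

/* Per-device state, with the transitions described above. */
enum dev_state {
    DEV_STOPPED,     /* no I/O's queued; waiting for the idle time to pass */
    DEV_STARTING,    /* some I/O's queued, ramping up to the maximum       */
    DEV_RUNNING,     /* at the maximum; each completion queues a new I/O   */
    DEV_STOPPING     /* draining; completions no longer queue new I/O's    */
};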

TNF probes

Diskomizer contains a number of TNF probe points. Some are for debugging Diskomizer itself and some are for debugging problems. The main probe points are "aiowrite" and "aioread", which are just before the calls to aiowrite and aioread respectively, and "handle_write" and "handle_read", which are in the routines that process the data returned by aiowait, depending on whether the I/O was a write or a read. To enable these probes, in prex you can do:

prex> list probes sunw%cte%diskomizer%aio = /.*/
name=aioread enable=off trace=on file=diskomizer64mpism.c line=1033 funcs=<no value>
name=aiowrite enable=off trace=on file=diskomizer64mpism.c line=1142 funcs=<no value>
name=handle_read enable=off trace=on file=diskomizer64mpism.c line=1636 funcs=<no value>
name=handle_write enable=off trace=on file=diskomizer64mpism.c line=1703 funcs=<no value>
prex> enable sunw%cte%diskomizer%aio = /.*/
prex> list probes sunw%cte%diskomizer%aio = /.*/
name=aioread enable=on trace=on file=diskomizer64mpism.c line=1033 funcs=<no value>
name=aiowrite enable=on trace=on file=diskomizer64mpism.c line=1142 funcs=<no value>
name=handle_read enable=on trace=on file=diskomizer64mpism.c line=1636 funcs=<no value>
name=handle_write enable=on trace=on file=diskomizer64mpism.c line=1703 funcs=<no value>
prex>

Or you can just select the read or write probes like this:

prex> list probes sunw%cte%diskomizer%aio = read
name=aioread enable=on trace=on file=diskomizer64mpism.c line=1033 funcs=<no value>
name=handle_read enable=on trace=on file=diskomizer64mpism.c line=1636 funcs=<no value>
prex>

Options

Diskomizer has a large number of different options, most of which need never be changed. To help the user, the options are grouped into four different types:

  1. General

    These have no prefix and are the values I expect users will often want to change. They should be well understood after reading the documentation.

  2. Expert

    These are all prefixed "EXPERT_". These options exist for expert users of Diskomizer. The effects of changing these values are often surprising, and changing them should not be needed.

  3. Obscure

    These are all prefixed "OBSCURE_". These are options that change obscure values and/or control obscure tests. Generally these should not be touched.

  4. Debug

    These are all prefixed "DEBUG_". These are left over from the development, testing and debugging of Diskomizer. I would never expect you to change these.

If an option is supplied that is not understood by diskomizer, or by one of the shared objects diskomizer is using, then this is treated as a fatal error.


Performance considerations

There are two things that limit the number of I/O's that Diskomizer can keep in the kernel: CPU and memory.

  1. CPU

    The routine that uses most of the CPU is the check summing routine. It takes the optimised Diskomizer 5 instructions per byte just to calculate the checksum; the unoptimised version takes more than twice as many. So with an 8K block size you are executing 8192 * 5 instructions just calculating the checksum for each I/O. Hence the number and speed of your CPUs will form the upper bound on the amount of data that Diskomizer will keep in transit to the disks. The size of your buffers will also play a part in limiting the number of I/Os that Diskomizer can keep in the kernel: the larger the buffers, the longer it takes to check them, so while you may have more data going to and from the device, the number of I/O's will be lower.

  2. Memory, virtual and physical.

    As noted above, Diskomizer uses a lot of memory. If your configuration needs more memory than there is physically in the system, then you will notice a performance degradation, particularly if you use the BEST memory allocator and have configured a large number, or very large number, of shared memory segments into the system. Since intimate shared memory is not pageable by the kernel, all the shared memory used by Diskomizer ends up locked in memory, leaving all the other memory needs of the system to be satisfied by the remaining memory. If the amount of remaining memory is small then the kernel will thrash, paging data in and out.

Multi Terabyte Support

Diskomizer knows how to read EFI labels on devices and therefore can be used on devices greater than 1TB in size.

Due to a bug in the early EFI label code in Solaris, where the version number of the label was incorrectly set to 10002, EFI labels made with that version of format cannot be read by diskomizer unless the option OBSCURE_ALLOW_EFI_102 is set.

It can also read the new “plus1tb vtoc”.

Usage Tracking

By default, if your domainname as returned by the domainname(1M) command ends with “.sun.com”, diskomizer will send usage tracking data back. The data is also written into a file in the current working directory called “usage_tracking.xml”. No personal data is sent.

Implementation Notes

  1. BEST memory allocator.

    The "BEST" memory allocator allocates shared memory in what Diskomizer thinks will be the fastest way which is most likely to succeed. Currently this means that the buffers that are used for I/O are always allocated with mmap from /dev/zero as these can not be detached and the default value of shmsys:shminfo_shmmni is too low for Diskomizer to be sure of not live locking while attaching shared memory.

  2. Reporting all I/O's to be stopped.

    Diskomizer is a victim of bug 4162491, which results in the 32-bit Diskomizer reporting that devices will stop on Dec 14 1901 in timezones east of GMT. It is quite safe to ignore these messages. This is not a Y2K issue.

  3. Variable I/O sizes.

    The implementation of variable I/O sizes is an interim measure. The device is broken up into logical blocks that are the size of the largest I/O that you are doing. All smaller I/O's are then done only to the start of this larger logical block. So if you are doing 8K and 1K I/O's, all the 1K I/O's will be to the first 1K of the 8K blocks.