This is an updated version of the CBF draft proposal, following the Brookhaven imgCIF workshop. (This proposes a detailed syntax for the multiple pseudo-ASCII header sections and binary sections structure as discussed at the Brookhaven workshop.
Note: the present CBFlib is experimenting with a "binary string" data value type, instead of the separated header section / binary section concept described here. Some comments have been added as to possible changes. This approach will be evaluated, and may replace the previous idea. It should be regarded as experimental.)
Here's my attempt to define a CBF format; please excuse any inconsistencies and incompleteness.

I separate the definition from comments on discussion items by using round brackets to refer to notes kept separate from the main text, e.g. (1) refers to point 1 in the notes section. I suggest that I try to update and redistribute this definition as new suggestions are sent or consensus is reached.
NOTE:
The initial aim is to support efficient storage of raw experimental data from area-detectors (images) with no loss of information compared to existing formats. The format should be both efficient in terms of writing and reading speeds, and in terms of stored file sizes, and should be simple enough to be easily coded, or ported to new computer systems.
Flexibility and extensibility are required, and later the storage of other forms of data may be added without affecting the present definitions.
The aims are achieved by a simple binary file format, consisting of a variable length header section followed by binary data sections. The binary data is fully described by CIF data name / value pairs within the header sections. The header sections may also contain other auxiliary information.
The present version of the format only tries to deal with simple Cartesian data. This is essentially the "raw" data from detectors that is typically stored in commercial formats or individual formats internal to particular institutes, but could be other forms of data. It is hoped that CBF can replace individual laboratory or institute formats for "home"-built detector systems, be used as an inter-program data exchange format, and be offered as an output choice by a number of commercial detector manufacturers specialising in X-ray and other detector systems.
This format does not imply any particular demands on processing software nor on the manner in which such software should work. Definitions of units, coordinate systems, etc. may be quite different. The clear, precise definitions within CIF, and hence CBF, help, when necessary, to convert from one system to another. Whilst no strict demands are made, it is clearly to be hoped that software will make as much use as is reasonable of information relevant to the processing which is stored within the file. It is expected that processing software will give clear and informative error messages when it encounters problems within CBFs or does not support mechanisms necessary for inputting a file.
imgCIF is the name of the CIF dictionary which contains the terms specific to describing the binary data. Thus a CBF uses data names from the imgCIF and other CIF dictionaries.
(Translation programs will be written which will allow the whole of the CBF to be converted to CIFs which contain ASCII encoding of the binary data. These too will use the imgCIF dictionary terms.)
The example is an image of 768 by 512 pixels stored as 16 bit unsigned integers, in little endian byte order. (This is the native byte ordering on a PC.) The pixel sizes are 100.5 by 99.5 microns. Comment lines starting with a hash sign (#) are used to explain the contents of the header. Only the ASCII part of the file is shown, but comments are used to describe the start of the binary section.
First the file is shown with the minimum of comments that a typical outputting program might add. Then it is repeated, but with "over-commenting" to explain the format.
Here is how a file might appear if listed on a PC or on a Unix system using 'more':
(Note: under the CBFlib scheme the '###_START_OF_HEADER' and '###_END_OF_HEADER' identifiers become meaningless. The '###_START_OF_BIN' and '###_END_OF_BINARY' identifiers disappear, but have equivalents within the "binary string" value.)
###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0
###_START_OF_HEADER

# Data block for image 1

data_image_1

_entry.id 'image_1'

# Sample details

_chemical.entry_id 'image_1'
_chemical.name_common 'Protein X'

# Experimental details

_exptl_crystal.id 'CX-1A'
_exptl_crystal.colour 'pale yellow'

_diffrn.id DS1
_diffrn.crystal_id 'CX-1A'

_diffrn_measurement.diffrn_id DS1
_diffrn_measurement.method Oscillation
_diffrn_measurement.sample_detector_distance 0.15

_diffrn_radiation_wavelength.id L1
_diffrn_radiation_wavelength.wavelength 0.7653
_diffrn_radiation_wavelength.wt 1.0

_diffrn_radiation.diffrn_id DS1
_diffrn_radiation.wavelength_id L1

_diffrn_source.diffrn_id DS1
_diffrn_source.source synchrotron
_diffrn_source.type 'ESRF BM-14'

_diffrn_detector.diffrn_id DS1
_diffrn_detector.id ESRFCCD1
_diffrn_detector.detector CCD
_diffrn_detector.type 'ESRF Be XRII/CCD'

_diffrn_detector_element.id 1
_diffrn_detector_element.detector_id ESRFCCD1

_diffrn_frame_data.id F1
_diffrn_frame_data.detector_element_id 1
_diffrn_frame_data.array_id 'image_1'
_diffrn_frame_data.binary_id 1

# Define image storage mechanism

loop_
_array_structure.array_id
_array_structure.binary_id
_array_structure.encoding_type
_array_structure.compression_type
_array_structure.byte_order
image_1 1 unsigned_16_bit_integer none little_endian

loop_
_array_intensities.array_id
_array_intensities.linearity
_array_intensities.undefined_value
_array_intensities.overload_value
image_1 linear 0 65535

# Define dimensionality and element rastering

loop_
_array_structure_list.array_id
_array_structure_list.index
_array_structure_list.dimension
_array_structure_list.precedence
_array_structure_list.direction
image_1 1 768 1 increasing
image_1 2 512 2 decreasing

loop_
_array_element_size.array_id
_array_element_size.index
_array_element_size.size
image_1 1 100.5e-6
image_1 2 99.5e-6

###_END_OF_HEADER
###_START_OF_BIN
###_END_OF_BINARY
###_END_OF_CBF

Here the file header is shown again, but this time with many comment lines added to explain the format:
###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0
# This line starting with a '#' is a CIF and CBF comment line,
# but the first line with the three '#'s is a CBF identifier.
# The text '###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION' identifies
# the file as a CBF and must be present as the very first line of
# every CBF file. Following 'VERSION' is the version number of the
# file. A version 1.0 CBF should be readable by any program which
# fully supports the version 1.0 CBF definitions.

# Comment lines and white space (blanks and new lines) may appear
# anywhere outside the binary sections.

###_START_OF_HEADER
# The '###_START_OF_HEADER' identifier defines the start of an ASCII
# header section. This is where the details of the image and auxiliary
# information are defined.

# Data block for image 1

data_image_1
# 'data_' defines the start of a CIF (and CBF) data block. We've
# chosen to call this data block 'image_1', but this was an arbitrary
# choice. Within a data block a data item may only be used once.

_entry.id 'image_1'

# Sample details

_chemical.entry_id 'image_1'
_chemical.name_common 'Protein X'
# The apostrophes enclose the string which contains a space.
# Experimental details

_exptl_crystal.id 'CX-1A'
_exptl_crystal.colour 'pale yellow'

_diffrn.id DS1
_diffrn.crystal_id 'CX-1A'

_diffrn_measurement.diffrn_id DS1
_diffrn_measurement.method Oscillation
_diffrn_measurement.sample_detector_distance 0.15

_diffrn_radiation_wavelength.id L1
_diffrn_radiation_wavelength.wavelength 0.7653
_diffrn_radiation_wavelength.wt 1.0

_diffrn_radiation.diffrn_id DS1
_diffrn_radiation.wavelength_id L1

_diffrn_source.diffrn_id DS1
_diffrn_source.source synchrotron
_diffrn_source.type 'ESRF BM-14'

_diffrn_detector.diffrn_id DS1
_diffrn_detector.id ESRFCCD1
_diffrn_detector.detector CCD
_diffrn_detector.type 'ESRF Be XRII/CCD'

_diffrn_detector_element.id 1
_diffrn_detector_element.detector_id ESRFCCD1

_diffrn_frame_data.id F1
_diffrn_frame_data.detector_element_id 1
_diffrn_frame_data.array_id 'image_1'
_diffrn_frame_data.binary_id 1
# Many more data items can be defined, but the above gives the idea
# of a useful minimum set (not minimum in the sense of compulsory:
# the above data items are optional in a CIF or CBF).

# Define image storage mechanism

loop_
_array_structure.array_id
_array_structure.binary_id
_array_structure.encoding_type
_array_structure.compression_type
_array_structure.byte_order
image_1 1 unsigned_16_bit_integer none little_endian

loop_
_array_intensities.array_id
_array_intensities.linearity
_array_intensities.undefined_value
_array_intensities.overload_value
image_1 linear 0 65535

# Define dimensionality and element rastering
# Here the size of the image and the ordering (rastering) of the
# data elements is defined. The CIF 'loop_' structure is used to
# define different dimensions. (It can also be used for defining
# multiple images.)
loop_
_array_structure_list.array_id
_array_structure_list.index
_array_structure_list.dimension
_array_structure_list.precedence
_array_structure_list.direction
image_1 1 768 1 increasing
image_1 2 512 2 decreasing

loop_
_array_element_size.array_id
_array_element_size.index
_array_element_size.size
image_1 1 100.5e-6
image_1 2 99.5e-6
# The 'array_id' identifies data items belonging to the same array. Here
# we have chosen the name 'image_1', but another name could have been
# used, so long as it is used consistently. The 'index' component refers
# to the dimension being defined, and the 'dimension' component defines
# the number of elements in that dimension. The 'precedence' component
# defines the precedence of rastering of the data. In this case the
# first dimension is the fastest changing dimension. The 'direction'
# component tells us the direction in which the data rasters within a
# dimension. Here the data rasters from the minimum element towards
# the maximum element ('increasing') in the first dimension, and more
# slowly from the maximum element towards the minimum element in the
# second dimension. (This is the default rastering order.)

# The storage of the binary data is now fully defined.

# Further data items could be defined, but this header ends with the
# '###_END_OF_HEADER' identifier.

###_END_OF_HEADER
# Here comments or white space may be added e.g. to pad out the header
# so that the start of the binary data is on a word boundary.

# The '###_START_OF_BIN' identifier is in fact 32 bytes long and contains
# bytes to separate the "ASCII" lines from the binary data, bytes to
# try to stop the listing of the header, bytes which define the binary
# identifier which should be set to 1 to match the 'binary_id' defined
# in the header, and bytes which define the length of the binary
# section.
# In this case the length of the binary section is simply
# 768*512*2 = 786432 bytes (or more, if for some reason the binary
# section is made deliberately bigger than the binary data stored).

###_START_OF_BIN
###_END_OF_BINARY
# The '###_END_OF_BINARY' identifier must occur starting at the first
# byte after the number of bytes defined in the start-of-binary
# identifier. This may be used to check data integrity. (Following the
# '###_END_OF_BINARY' identifier the file is in "ASCII" mode again, so
# these comment lines are allowed.)

# The '###_END_OF_CBF' identifier signals the end of the CBF file.

###_END_OF_CBF
Every CBF begins with the identifier:

###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION

which must always be present so that a program can easily identify whether or not a file is a CBF, by simply inputting the first 41 characters. (The space is a blank (ASCII 32) and not a tab. All identifier characters are uppercase only.)
The first hash means that this line within a CIF would be a comment line, but the three hashes mean that this is a line describing the binary file layout for CBF. (All CBF internal identifiers start with the three hashes, and all other must immediately follow a "line separator".) No whitespace may precede the first hash sign.
Following the file identifier is the version number of the file, e.g. the full line might appear as:

###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0

The version number must be separated from the file identifier characters by whitespace, e.g. a blank (ASCII 32).
The version number is defined as a major version number and minor version number separated by the decimal point. A change in the major version may well mean that a program for the previous version cannot input the new version as some major change has occurred to CBF (3). A change in the minor version may also mean incompatibility, if the CBF has been written using some new feature. e.g. a new form of linearity scaling may be specified and this would be considered a minor version change. A file containing the new feature would not be readable by a program supporting only an older version of the format.
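As an illustration, a reader might check the file identifier and extract the version number like this (a sketch in Python; the function name cbf_version is mine, not part of the definition):

```python
MAGIC = b"###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION"  # the 41 identifier chars

def cbf_version(first_line):
    """Return (major, minor) if `first_line` (raw bytes) starts with the
    CBF file identifier, otherwise None."""
    if not first_line.startswith(MAGIC):
        return None
    fields = first_line[len(MAGIC):].split()
    if not fields:
        return None                      # identifier present but no version
    major, minor = fields[0].split(b".")
    return int(major), int(minor)
```

A reader can then refuse files whose major version is newer than the one it supports, per the compatibility rule above.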
A header section starts with the identifier:

###_START_OF_HEADER

followed by the carriage return, line-feed pair. (Another carriage return, line-feed pair immediately precedes this and all other CBF identifiers, with the exception of the CBF file identifier, which is at the very start of the file.)
Any CIF data name may occur within the header section.
A header section ends with the identifier:

###_END_OF_HEADER

followed by carriage return, line-feed.
(Under CBFlib binary sections would be replaced by "binary string" values within a data name/value pair. The structure of the proposed "binary string" is similar to the binary sections described here.)
The ASCII part is:
A binary section starts with the identifier:

###_START_OF_BIN

The full identifier is:
  Byte No.    ASCII Symbol                      Byte Value (unsigned)
                                                (decimal)
  --------    ------------                      ---------------------
      1       #                                  35
      2       #                                  35
      3       #                                  35
      4       _                                  95
      5       S                                  83
      6       T                                  84
      7       A                                  65
      8       R                                  82
      9       T                                  84
     10       _                                  95
     11       O                                  79
     12       F                                  70
     13       _                                  95
     14       B                                  66
     15       I                                  73
     16       N                                  78
     17       Form-feed                          12
     18       Substitute (Control-Z)             26
     19       End of Transmission (Control-D)    04
     20                                         213
     21       }  Bytes 21 - 24 define the binary section
     22       }  identifier. This is a 32-bit unsigned little-
     23       }  endian integer. The number is used to relate
     24       }  data defined in the header section.
     25       }
     26       }  Bytes 25 - 32 define the length of
     27       }  the following binary section in bytes
     28       }  as a 64-bit unsigned little-endian
     29       }  integer. (The value 0 means the
     30       }  size is unknown, and no other
     31       }  pseudo-ASCII nor binary sections may
     32       }  follow.)
The binary characters serve specific purposes:
This value may be set to zero if this is the last binary section or header section in the file. This allows a program writing, for example, a single compressed image to avoid having to rewind the file to write the size of the compressed data. (For small files compression within memory may be practical, and this may not be an issue. However very large files exist where writing the compressed data "on the fly" may be the only realistic method.) It is however recommended that this value be set, as it permits concatenation of files.
Since the data may have been compressed, knowing the numbers of elements and size of each element does not necessarily tell a program how many bytes to jump over, so here it is stored explicitly. This also means that the reading program does not have to decode information in the header section to move through the file.
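The 32-byte start-of-binary identifier described above could be built and decoded as follows (a Python sketch; the helper names are mine, and the struct little-endian formats correspond to the two integer fields defined in the table):

```python
import struct

START_OF_BIN_MAGIC = (b"###_START_OF_BIN"         # bytes 1-16: ASCII text
                      + bytes([12, 26, 4, 213]))  # bytes 17-20: FF, ^Z, ^D, 213

def make_start_of_bin(binary_id, length):
    """Build the full 32-byte start-of-binary identifier."""
    return (START_OF_BIN_MAGIC
            + struct.pack("<I", binary_id)  # bytes 21-24: 32-bit LE binary id
            + struct.pack("<Q", length))    # bytes 25-32: 64-bit LE byte length

def parse_start_of_bin(block):
    """Return (binary_id, length) from a 32-byte identifier block."""
    if block[:20] != START_OF_BIN_MAGIC:
        raise ValueError("not a start-of-binary identifier")
    binary_id, = struct.unpack("<I", block[20:24])
    length, = struct.unpack("<Q", block[24:32])
    return binary_id, length
```

For the example image, make_start_of_bin(1, 768 * 512 * 2) produces the identifier preceding the 786432 bytes of pixel data.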
A binary section ends with the identifier:

###_END_OF_BINARY

followed by the carriage return / line-feed pair.
The first "line separator" separates the binary data from the pseudo-ASCII line.
This identifier is in a sense redundant, since the binary section length value tells a program how many bytes to jump over to the end of the binary section. However, this redundancy has been deliberately added for error checking, and for possible file recovery in the case of a corrupted file.
This identifier must be present at the end of every binary section, including sections whose length has not been explicitly defined within the file.
However, in general no guarantee is made of block or word alignment in a CBF of unknown origin.
The file is terminated by the

###_END_OF_CBF

identifier (including the carriage return, line-feed pair).
The binary identifier values used within a header section, and hence the immediately following binary section(s) must be unique.
A different header section may reuse binary identifier values.
(This allows concatenation of files without re-numbering the binary identifiers, and provides a certain level of localization of data within the file, to avoid programs having to search potentially huge files for missing binary sections.)
This shows the structuring of a simple example, e.g. one header section followed by one binary section, such as could be used to store a single image.
###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0
###_START_OF_HEADER
data_
###_END_OF_HEADER
###_START_OF_BIN
###_END_OF_BINARY
###_END_OF_CBF
This shows a possible structuring of a more complicated example: two header sections, of which the first contains two data blocks and defines three binary sections. CIF comment lines, starting with a hash (#), are used to explain the structure.
###_CRYSTALLOGRAPHIC_BINARY_FILE: VERSION 1.0
# A comment cannot appear before the file identifier, but can appear
# anywhere else, except within the binary sections.
###_START_OF_HEADER
# Here the first data block starts
data_
# The 'data_' identifier finishes the first data block and starts the
# second
data_
###_END_OF_HEADER
# The first header section is finished, but the first binary section
# does not start until the 'start of binary' identifier is found. This
# part of the file is still pseudo-ASCII.
###_START_OF_BIN
###_END_OF_BINARY
# Following the 'end of binary' identifier the file is pseudo-ASCII
# again, so comments are valid up to the next 'start of binary'
# identifier.

# Second binary section.
###_START_OF_BIN
###_END_OF_BINARY

# Third binary section.
###_START_OF_BIN
###_END_OF_BINARY

# Second header section
###_START_OF_HEADER
data_
###_END_OF_HEADER

# Since this is the last binary section in the file, the byte length
# could optionally be set to zero, which indicates it is undefined.
# (All the other binary sections must have these values defined to
# allow reading software to jump over sections.)
###_START_OF_BIN
###_END_OF_BINARY
###_END_OF_CBF
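Given the stored section lengths, a reader can walk such a file without decoding the headers. The following Python sketch (helper name mine) scans for the start-of-binary identifier and uses the 64-bit length field to jump over each binary section. Note that a naive byte search like this could in principle match a stray pattern inside binary data; a real reader should track the pseudo-ASCII regions properly.

```python
import struct

def list_binary_sections(data):
    """Scan a CBF byte stream, returning (binary_id, length, data_start)
    for each binary section, where data_start is the offset of the first
    byte of binary data after the 32-byte start-of-binary identifier."""
    sections, pos = [], 0
    while True:
        pos = data.find(b"###_START_OF_BIN", pos)
        if pos < 0:
            return sections
        binary_id, = struct.unpack("<I", data[pos + 20:pos + 24])
        length, = struct.unpack("<Q", data[pos + 24:pos + 32])
        sections.append((binary_id, length, pos + 32))
        if length == 0:            # unknown size: must be the last section
            return sections
        pos += 32 + length         # skip identifier plus binary data
```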
The _array_* categories cover all data names concerned with the storage of images or regular array data.
Data names from any of the existing categories may be relevant as auxiliary information in the header section, but data names from the _diffrn_ category are likely to be the most relevant, and a number of new data names in this category are necessary.
The "array" class is defined by data names from the ARRAY_STRUCTURE and ARRAY_STRUCTURE_LIST categories.
Here is a short summary of the data names and their purposes.
e.g. 'unsigned_16_bit_integer' is used if the stored image was 16 bit unsigned integer values, regardless of any compression scheme used.
Fundamental to treating a long line of data values as a 2-D image or an N-dimensional volume or hyper-volume is the knowledge of the manner in which the values need to be wrapped. For the raster orientation to be meaningful we define the sense of the view:
For a detector image the sense of the view is defined as that looking from the crystal towards the detector.
(For the present we consider only an equatorial plane geometry, with 2-theta = 0; the detector as being vertically mounted.)
The rastering is defined by the three data names _array_structure_list.index, _array_structure_list.precedence, and _array_structure_list.direction.
index refers to the dimension index, i.e. in an image, 1 refers to the X-direction (horizontal) and 2 refers to the Y-direction (vertical).

precedence refers to the order in which the data is wrapped.

direction refers to the direction of the rastering for that index.
We define a preferred rastering orientation, which is the default if the keyword is not defined. This is with the start in the upper-left-hand corner, the fastest changing direction of the rastering running horizontally, and the slower change from top to bottom.
(Note: With off-line scanners the rastering type depends on which way round the imaging plate or film is entered into the scanner. Care may need to be taken to keep this consistent.)
# Define image size and rastering
loop_
_array_structure_list.array_id
_array_structure_list.index
_array_structure_list.dimension
_array_structure_list.precedence
_array_structure_list.direction
image_1 1 1300 1 increasing
image_1 2 1200 2 decreasing

To define two arrays, the first a volume of 100 by 100 by 50 elements, fastest changing in the first dimension from left to right, changing from bottom to top in the second dimension, and slowest changing in the third dimension from front to back; and the second an image of 1024 by 1280 pixels, with the second dimension changing fastest from top to bottom, and the first dimension changing more slowly from left to right; the following header section might be used:
# Define array sizes and rasterings
loop_
_array_structure_list.array_id
_array_structure_list.index
_array_structure_list.dimension
_array_structure_list.precedence
_array_structure_list.direction
volume_a 1 100 1 increasing
volume_a 2 100 2 increasing
volume_a 3 50 3 increasing
slice_1 1 1024 2 increasing
slice_1 2 1280 1 decreasing
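The rastering rules above amount to a mapping from array indices to a position in the stored element stream. This Python sketch (function name mine; the reading that 'decreasing' reverses the stored order of that dimension's coordinates is my interpretation of the text) assumes 0-based indices:

```python
def linear_offset(indices, dims, precedence, direction):
    """Map 0-based array indices to the element's position in the stored
    data stream, given per-dimension sizes, precedence values
    (1 = fastest changing) and direction values."""
    # Visit dimensions from fastest changing to slowest changing.
    order = sorted(range(len(dims)), key=lambda d: precedence[d])
    offset, stride = 0, 1
    for d in order:
        i = indices[d]
        if direction[d] == "decreasing":
            i = dims[d] - 1 - i   # stored order runs from max towards min
        offset += i * stride
        stride *= dims[d]
    return offset
```

For the 768 by 512 example image (precedence 1, 2; directions increasing, decreasing), the first stored element is the one with the maximum second index, consistent with the default top-to-bottom rastering.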
Existing data storage formats use a wide variety of methods for storing physical intensities as element values. The simplest is a linear relationship, but square root and logarithm scaling methods have attractions and are used. Additionally some formats use a lower dynamic range to store the vast majority of element values, and use some other mechanism to store the elements which over-flow this limited dynamic range. The problem of limited dynamic range storage is solved by the data compression methods byte_offsets and predictor_huffman (see next Section), but the possibility of defining non-linear scaling must also be provided.
The _array_intensities.linearity data item specifies how the intensity scaling is defined. Apart from linear scaling, which is specified by the value linear, two other methods are available to specify the scaling.
One is to refer to the detector system, and then the manufacturer's method will either be known to a program or not. This has the advantage that any system can be easily accommodated, but requires external knowledge of the scaling system.
The recommended alternative is to define a number of standard intensity linearity scaling methods, with additional data items when needed. A number of standard methods are defined by _array_intensities.linearity values: offset, scaling_offset, sqrt_scaled, and logarithmic_scaled. The "offset" methods require the data item _array_intensities.offset to be defined, and the "scaling" methods require the data item _array_intensities.scaling to be defined.

The above scaling methods allow the element values to be converted to a linear scale, but do not necessarily relate the linear intensities to physical units. When appropriate, the data item _array_intensities.gain can be defined. Dividing the linearized intensities by the value of _array_intensities.gain should produce counts.

Two special optional data flag values may be defined, both of which refer to the values of the "raw" stored intensities in the file (after decompression if necessary), and not to the linearized scaled values. _array_intensities.undefined_value specifies a value which indicates that the element value is not known. This may be due to missing data, e.g. a circular image stored in a square array, or to data values flagged as missing, e.g. behind a beam-stop. _array_intensities.overload_value indicates the intensity value at and above which values are considered unreliable. This is usually due to saturation.
# Define image intensity scaling
loop_
_array_intensities.array_id
_array_intensities.linearity
_array_intensities.gain
_array_intensities.undefined_value
_array_intensities.overload_value
image_1 linear 1.2 0 65535
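A hedged sketch of how a reader might apply these items follows. The inverse formulas used for the sqrt and logarithmic methods are assumptions for illustration only, not definitions from this proposal; the function name is mine.

```python
def linearize(value, linearity, scaling=1.0, offset=0.0, gain=1.0,
              undefined=None, overload=None):
    """Convert a stored (decompressed) element value to counts.
    Returns None for elements flagged as undefined."""
    if undefined is not None and value == undefined:
        return None               # element value is not known
    if linearity == "linear":
        lin = value
    elif linearity == "offset":
        lin = value + offset
    elif linearity == "scaling_offset":
        lin = value * scaling + offset
    elif linearity == "sqrt_scaled":
        lin = (value / scaling) ** 2      # assumed inverse of sqrt scaling
    elif linearity == "logarithmic_scaled":
        lin = 10 ** (value / scaling)     # assumed inverse of log scaling
    else:
        raise ValueError("unknown linearity: %s" % linearity)
    return lin / gain             # dividing by the gain yields counts
```

The overload check is left to the caller, since overloaded values are defined to be unreliable rather than invalid.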
In Version 1 two types of lossless data compression algorithms are defined. In later versions other types including lossy algorithms may be added.
The first algorithm is referred to as byte_offsets and has been chosen for the following characteristics: it is very simple, may be easily implemented, and can easily lead to faster reading and writing to hard disk as the arithmetic complication is very small. This algorithm can never achieve better than a factor of two compression relative to 16-bit raw data, but for most diffraction data the compression will indeed be very close to a factor 2.
The second algorithm is referred to as predictor_huffman and has been chosen as it can achieve close to optimum compression on typical diffraction patterns, with a relatively fast algorithm, whilst avoiding patent problems and licensing fees. This will typically provide a compression ratio between 2.5 and 3 on well exposed diffraction images, and will achieve greater ratios on more weakly exposed data e.g. 4 - 5 on "thin phi-slicing" images. Normally, this would be a two pass algorithm: a first pass to define symbol probabilities, and a second pass to entropy encode the data symbols. However, the Huffman algorithm makes it possible to use a fixed table of symbol codes, so faster single pass compression may be implemented with a small loss in compression ratio. With very fast CPUs this approach may provide faster hard disk reading and writing than the 'byte_offsets' algorithm owing to the smaller amounts of data to be stored.
There are practical disadvantages to data compression: the value of a particular element cannot be obtained without calculating the values of all previous elements, and there is no simple relationship between element position and stored bytes. If, in general, the whole array is required, this disadvantage does not apply. These disadvantages can be reduced by compressing different regions of the arrays separately, which is an approach available in TIFF, but this adds to the complexity of reading and writing images.
For simple predictor algorithms such as the byte_offsets algorithm a simple alternative is an optional data item, which defines a look-up table of element addresses, values, and byte positions within the compressed data, and it is suggested that this approach is followed.
The algorithm works because of the following property of almost all diffraction data and much other image data: the value of one element tends to be close to the value of the adjacent elements, and the vast majority of the differences use little of the full dynamic range. However, noise in experimental data means that run-length encoding is not useful (unless the image is separated into different bit-planes). If a variable length code is used to store the differences, with the number of bits used being inversely proportional to the probability of occurrence, then compression ratios of 2.5 to 3.0 may be achieved. However, the optimum encoding becomes dependent on the exact properties of the image, and in particular on the noise. Here a lower compression ratio is achieved, but the resulting algorithm is much simpler and more robust.
The byte_offsets algorithm is the following:
It may be noted that one element value may require up to 7 bytes for storage; however, for almost all 16-bit experimental data the vast majority of element values will be within +-127 units of the previous element, and so require only 1 byte of storage, so a compression factor of close to 2 is achieved.
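As an illustration of the scheme just described, here is a hedged Python sketch of a byte-offset compressor and decompressor. The escape markers chosen here (-128 for the 1-byte code and -32768 for the 2-byte code) are this sketch's assumption, not values fixed by this proposal; they give the worst case of 7 bytes per element noted above (1 + 2 + 4).

```python
import struct

def byte_offsets_compress(values):
    """Store each element as the difference from the previous element:
    1 byte when the difference fits in -127..127, escaping to a 2-byte
    and then a 4-byte signed little-endian integer otherwise."""
    out, previous = bytearray(), 0
    for v in values:
        delta = v - previous
        if -127 <= delta <= 127:
            out += struct.pack("<b", delta)
        elif -32767 <= delta <= 32767:
            out += struct.pack("<b", -128)      # escape to 2-byte code
            out += struct.pack("<h", delta)
        else:
            out += struct.pack("<b", -128)      # escape to 2-byte code
            out += struct.pack("<h", -32768)    # escape to 4-byte code
            out += struct.pack("<i", delta)
        previous = v
    return bytes(out)

def byte_offsets_decompress(data, n_elements):
    """Invert byte_offsets_compress for n_elements values."""
    values, previous, pos = [], 0, 0
    for _ in range(n_elements):
        delta, = struct.unpack("<b", data[pos:pos + 1]); pos += 1
        if delta == -128:
            delta, = struct.unpack("<h", data[pos:pos + 2]); pos += 2
            if delta == -32768:
                delta, = struct.unpack("<i", data[pos:pos + 4]); pos += 4
        previous += delta
        values.append(previous)
    return values
```

On smoothly varying data most deltas fit in one byte, which is where the near-factor-2 compression of 16-bit images comes from.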
9.0 REFERENCES
1. S R Hall, F H Allen, and I D Brown, "The Crystallographic Information
File (CIF): a New Standard Archive File for Crystallography",
Acta Cryst., A47, 655-685 (1991)
(1) A pure CIF based format has been considered inappropriate given the
enormous size of many raw experimental data-sets and the desire for
efficient storage, and reading and writing.
(2) Some simple method of checking whether the file is a CBF or not is
needed. Ideally this would be right at the start of the file. Thus, a
program only needs to read in n bytes and should then know immediately
if the file is of the right type or not. I think this identifier should
be some straightforward and clear ASCII string.
The underscore character has been used to avoid any ambiguity in the
spaces.
(Such an identifier should be long enough that it is highly unlikely to
occur randomly, and if it is ASCII text, should be very slightly
obscure, again to reduce the chances that it is found accidentally. Hence
I added the three hashes, but some other form may be equally valid.)
(3) The format should maintain backward compatibility e.g. a version 1.0
file can be read in by a version 1.1, 3.0, etc. program, but to allow
future extensions the reverse cannot be guaranteed to be true.
This page has been produced by Andy Hammersley (E-mail: hammersley@esrf.fr).
Further modification is highly likely.
EXAMPLE CBF