It's my understanding that there are only two recognized TIF formats for RGB data: one with 8 bits per channel (24-bit) and one with 16 bits per channel (48-bit). So a scanner that outputs 36-bit data is actually going to give you a 48-bit TIF file, since the data won't fit into the 24-bit format. It would be nice if a standard existed for 36-bit TIF files: your scanner's 36-bit data would fit perfectly into such a file, and the resulting file would take up less room in memory, or on disk, than a 48-bit file does. But if you create a new standard TIF format for 36-bit data, then someone else is going to want one for 42-bit data, or 64-bit data, and so on, and the number of formats would never end. My comments should be in agreement with the other comments you've received.
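To put rough numbers on the storage argument, here is a small sketch. The scan dimensions are hypothetical (a 4000 dpi scan of a 35 mm frame comes out somewhere around 5400 x 3600 pixels); the point is just that a 36-bit format would sit halfway between the two standard sizes, and 48-bit files are exactly twice the size of 24-bit ones.

```python
# Uncompressed size of one scan at various pixel depths.
# Dimensions are an assumption for illustration, not from the scanner spec.
width, height = 5400, 3600
pixels = width * height

for bits_per_pixel in (24, 36, 48):
    size_mb = pixels * bits_per_pixel / 8 / 2**20
    print(f"{bits_per_pixel}-bit: {size_mb:.1f} MB")
```

So even though only 12 of the 16 bits per channel carry real data, you pay for all 16 on disk.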
In a message dated 10/27/2001 7:37:26 AM Pacific Daylight Time, firstname.lastname@example.org writes:
The specifications for my Microtek Artixscan 4000t (the same as the
Polaroid Sprintscan 4000 in another box, except for the firmware) state a
36-bit depth. The Microtek scanning software (ScanWizard Pro) allows a
choice between RGB 24-bit and RGB 48-bit. It seems that the maximum
resulting "quality" is 12 bits per channel, and the software only
translates to 16 bits per channel for compatibility with programs such as
Photoshop. I suppose that the white point is converted from step 2^12 on
the 12-bit scale to 2^16 on the 16-bit scale, and the intermediary steps
are just interpolated (without any gain in image quality). Am I right?
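For what it's worth, the 12-to-16-bit translation the question describes is usually one of two simple rescalings; I don't know which one ScanWizard Pro actually uses, so this is just a sketch of the two common approaches. Neither adds any real information, which agrees with the "no gain in image quality" supposition: the same 4096 levels are simply spread over the 16-bit range.

```python
# Two common ways driver software might map a 12-bit sample (0..4095)
# into a 16-bit sample (0..65535). Which one any given scanner driver
# uses is an assumption; both are illustrative only.

def scale_shift(v12: int) -> int:
    # Left shift by 4 bits: fast, but white (4095) maps to 65520,
    # slightly short of full-scale 65535.
    return v12 << 4

def scale_full_range(v12: int) -> int:
    # Exact rescale so black (0) -> 0 and white (4095) -> 65535.
    return v12 * 65535 // 4095

print(scale_shift(4095))       # 65520
print(scale_full_range(4095))  # 65535
```

Either way, consecutive 12-bit levels end up 16 (or about 16) apart on the 16-bit scale, with no new intermediate tones actually measured.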