[discuss] RFC: Support IEEE 754R draft in the x86-64 psABI
MFC at uk.ibm.com
Sat May 13 20:21:40 CEST 2006
"H. J. Lu" <hjl at lucon.org> wrote on 13/05/2006 17:54:19:
> On Sat, May 13, 2006 at 09:45:55AM +0100, Mike Cowlishaw wrote:
> > What I worry about is that if I write to a file with one compiler
> > I'll get one encoding, and if I use a different compiler I might
> > get a different encoding appearing in the file. Files (lots of them)
> > exist on X86 (and other platforms) with the DPD encoding -- a
> > compiler that insisted on reading or writing in some different
> > format would be unusable with those data.
> That is nothing new. For floating point, different platforms may use
> different formats.
That was, indeed, true in 1980. That's why the IEEE 754 committee and
others recognized that standardization was needed. Since IEEE 754-1985
was approved there have been enormous benefits from that standard: with
a single format, everyone knows what to expect from (say) a 64-bit
double floating-point type. You'll find that same type in everything
from game machines and laptops to desktops, servers, and
super-computers.
> When you read a binary file with floating-point numbers created on
> another platform, you may have to do a conversion.
The only conversion I expect to have to do for binary floating-point
nowadays is endian-reversal. It's unfortunate that it is needed, but it
has nothing to do with floating-point: you can do the reversal
regardless of whether the data item is an integer, a binary
floating-point number, or a decimal floating-point number.
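(To make that concrete -- my sketch, not from the thread, and reverse8
is a name I've made up: one byte-reversal routine serves any 8-byte
item, whatever its type.)

    /* Reverse the byte order of any 8-byte item in place. The same
       routine serves an int64_t, a binary double, or a decimal64:
       endianness is a property of the byte stream, not of the
       floating-point format. */
    static void reverse8(void *p)
    {
        unsigned char *b = (unsigned char *)p;
        for (int i = 0; i < 4; i++) {
            unsigned char t = b[i];
            b[i] = b[7 - i];
            b[7 - i] = t;
        }
    }

    /* Usage: the identical call fixes up either kind of item:
       reverse8(&my_int64);  reverse8(&my_double);  */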
> Even on the same platform, the format of floating point may change.
> When you read a binary file with floating-point numbers created a
> year ago on the same machine, you may have to do a conversion too.
That's news to me. Do you have some examples?
> My point is that reading a binary file may require conversion. We
> should provide library functions to deal with it.
But why repeat the mistakes of the 1970s when this time we know what
they are? Why must every compiler and library writer provide two sets
of functions when one will do? Why must every programmer have to code
dual paths to handle both encodings?
The binary-integer format has been proposed because it is 'software-
friendly'. While that claim is debatable (it is sorely unfriendly to
most existing decimal software, and incompatible with most existing
data), I think Peter Tang summarized the truth very well in his paper
distributed to the IEEE 754r committee last year ('On Generalized BCD
Encodings for Decimal Floating Point', March 7, 2005):
"Our focus is on hardware algorithms as software algorithms can be
flexible and do not raise any real concern."
He was quite right; from a software point of view it's no big deal. One
encoding is sufficient, and enormously better than two. Introducing a new
encoding at this late date just wastes everyone's time and money.
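(A concrete illustration, added by me rather than taken from the
thread; the bit patterns below are my reading of the 754r draft
encodings, so treat them as an assumption to verify. The same value,
decimal64 1, has a different bit pattern under each encoding, which is
exactly why data written with one cannot be read by code expecting the
other.)

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        /* decimal64 value 1 (coefficient 1, exponent 0), as I
           understand the two draft encodings: */
        uint64_t dpd_one = 0x2238000000000001ULL;  /* DPD encoding */
        uint64_t bid_one = 0x31C0000000000001ULL;  /* BID encoding */

        /* A reader expecting the other encoding sees a different
           number, not an error -- silent data corruption. */
        printf("DPD: %016" PRIX64 "\n", dpd_one);
        printf("BID: %016" PRIX64 "\n", bid_one);
        return 0;
    }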