[discuss] RFC: Support IEEE 754R draft in the x86-64 psABI
MFC at uk.ibm.com
Thu May 11 12:59:34 CEST 2006
"Cornea, Marius" <marius.cornea at intel.com> wrote on 10/05/2006 23:49:13:
> >> Using binary was a great idea
> >> in 1946, when minimizing the number of components was paramount for
> >> several reasons (mostly MTBF). Those reasons are barely relevant
> >> today.
> Mike, maybe we are talking about different things indeed.
> We are not concerned at all with the decNumber capabilities or its
> implementation. We ran a comparison between our implementation and
> decNumber because decNumber is the only other 754R decimal
> implementation, and because it is going to be used by GCC for decimal
> floating-point arithmetic.
OK, let me try and explain why this comparison isn't valid. Let's suppose
we have three decimal arithmetic 'engines'. One that does decimal
arithmetic using a binary significand (your BID code, for example), one
that uses a decimal significand (DPD hardware for example), and one that
uses hybrid arithmetic (decNumber, for example). Each has its
corresponding native/internal data format.
Take any one of these. The format that will be fastest with the chosen
engine will (of course) be the one that uses the engine's internal format,
because in that case no conversion needs to be done. By definition, the
other two formats *must* be slower with that engine, as they will first
need to be converted to and from that internal format.
Equally, given a choice of engine, the worst-case performance penalty for
the other two formats is the cost of converting the format to the engine's
format, doing the calculation on the engine, and converting back to the
original format. (In practice it is much less than that because one can
stay in the engine's format until the result is forced to be represented
on disk or in accessible memory.)
So, if you choose decNumber as the engine, then the comparison between DPD
and BID must be the comparison between the conversion costs of DPD and
BID to and from the decNumber format. This factors out the cost of the
engine itself, which is the same in both cases.
Similarly, if you choose BID as the engine, then the worst performance you
could show for DPD would be the cost of converting DPD to BID, doing the
calculation in BID, and then converting back again.
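The accounting above can be put in a toy cost model (every number below is
invented purely for illustration; the engine and format names are just
labels, not measurements of the real implementations):

```python
# Toy cost model: a fair format comparison holds the engine fixed and
# charges each format only its conversion overhead; an unfair one varies
# the engine too, so the gap mostly measures the engines, not the formats.

COMPUTE = {"bid_engine": 10, "decnumber_engine": 40}   # per-op cost, invented
CONVERT = {                                            # one-way cost, invented
    ("bid", "bid_engine"): 0,
    ("dpd", "bid_engine"): 3,
    ("bid", "decnumber_engine"): 5,
    ("dpd", "decnumber_engine"): 4,
}

def worst_case_cost(fmt, engine):
    """Convert in, compute once, convert back out -- the worst case,
    since in practice one stays in the engine's format across many ops."""
    return CONVERT[(fmt, engine)] + COMPUTE[engine] + CONVERT[(fmt, engine)]

# Fair: same engine, so the formats differ only by conversion overhead.
fair_bid = worst_case_cost("bid", "bid_engine")          # 0 + 10 + 0 = 10
fair_dpd = worst_case_cost("dpd", "bid_engine")          # 3 + 10 + 3 = 16

# Unfair: different engines -- BID-on-BID versus DPD-on-decNumber.
unfair = (worst_case_cost("bid", "bid_engine"),
          worst_case_cost("dpd", "decnumber_engine"))    # (10, 48)
```

With the fair comparison the formats are 10 versus 16; the unfair pairing
reports 10 versus 48, and most of that 38-point gap is the engine, not the
format.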
Your comparison, however, compares a BID format on a BID engine with DPD
on a decNumber engine. The latter requires conversion of DPD to the
hybrid format, then carrying out unbounded-input arbitrary precision
arithmetic on the intermediate result, and then conversion back to DPD --
so of course it is slower. You are guaranteed to win that comparison, so
the comparison is meaningless; it conveys no information.
Equally, if the engine is a DPD one (such as on our forthcoming hardware),
then using BID or a hybrid format would of course be slower on that engine
than using the DPD format. That comparison, too, tells us nothing.
> We are just talking about choosing an encoding format (binary) for
> decimal floating-point values for the x86-64 psABI. With conversions,
> the other encoding (decimal) could still be used. Because x86-64 still
> uses binary hardware (60 years after 1946, so I must disagree at least
> in part with your statement above on using binary :-)) we believe that
> it will be more efficient overall to have a software implementation of
> the 754R decimal arithmetic on x86-64 that is based on the binary
> encoding of decimal values.
That makes the huge assumption that there will never be any hardware
support for the decimal formats on the X86 platform. Suppose there was
the obvious minimal support: instructions which simply took the DPD
formats and converted them to the binary registers -- the arithmetic is
then done in software, just as you do now. With that support, the cost of
DPD<-->binary conversions really is no longer an issue, and the software
could probably be a little faster than with BID.
> The decimal encoding seems artificial in
> this context (I understand your justification about the few gates used
> for conversion, but that does not apply to x86-64), and it is
> software-unfriendly. To carry out decimal floating-point operations on
> x86-64 we would have to convert the decimal values to a binary
> representation anyway, and this is much easier when using the 754R
> binary encoding - actually the binary values are there already in many
> cases (and for the 128-bit format, in all cases).
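The digit-access trade-off behind this argument can be sketched roughly as
follows (plain 4-bit BCD stands in for DPD here, which actually packs three
digits into 10 bits; neither function is the 754R codec):

```python
# Sketch: getting decimal digits out of the two significand styles.
# BID-style: the significand is one binary integer, directly usable by
# binary hardware, but extracting digits needs repeated division by 10.
# Decimal-style: each digit sits in a fixed-width bit field, so digits
# come out with shifts and masks, no division.

def digits_from_binary(sig):
    """BID-style extraction: one divmod per digit."""
    out = []
    while sig:
        sig, d = divmod(sig, 10)
        out.append(d)
    return out[::-1] or [0]

def digits_from_bcd(bits, ndigits):
    """Decimal-style extraction: fixed-field shift and mask per digit."""
    return [(bits >> (4 * i)) & 0xF for i in range(ndigits - 1, -1, -1)]
```

Both calls recover the digits 2, 0, 0, 6 from their respective encodings of
2006; the difference is that the binary path pays a division per digit,
while the decimal path pays only shifts, which is the crux of the
hardware-versus-software friendliness argument on each side.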
(Again, that assumes there is no decimal support in X86, which is, I hope,
just a temporary situation :-).)
In short, choosing a format based on what's best for a given platform at a
given point in time is a poor strategy. Choosing a format that is
practical for all platforms (even if it is not optimal on some platform
today, it could be tomorrow) makes more sense. The IEEE 754r committee
invented a format which used decimal encoding and which existed on no
platform, and Intel voted to adopt that (as did IBM and all the others
present). That was the right decision, because decimal-encoded formats
are the formats which are best matched to decimal data -- and decimal data
is what these formats exist to represent.