This page is a mirror of Tepples' nesdev forum mirror (URL TBD).

# I got obsessed with thinking about hex to decimal algorithms

I was trying to make a Super Mario World optimization patch (again), and the HEX to DEC conversion code was one of the routines I reprogrammed. Super Mario World uses a 6-digit score value (7 digits counting the lowest digit, which is always 0), and the score in binary/hex takes up 3 bytes. The routine I wrote divided the value by 100 twice to get 3 bytes storing 2 decimal digits each, then used a LUT to split each 2-digit byte value into 2 decimal digits.
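As a sketch, here is a Python model of how that routine works (my reading of the description above, not the actual 65816 code; all the names are mine):

```python
# Python model of the divide-by-100 approach described above.
# Assumption: this mirrors the description, not Super Mario World's real code.

# LUT analogous to the one in the patch: index 0-99 -> (tens digit, ones digit)
SPLIT = [(n // 10, n % 10) for n in range(100)]

def score_digits(value):
    """Convert a score (0-999999) held as one binary number into 6 digits."""
    value, c = divmod(value, 100)   # first division by 100
    a, b = divmod(value, 100)       # second division by 100
    digits = []
    for group in (a, b, c):         # most significant pair first
        digits.extend(SPLIT[group]) # the LUT splits each 0-99 byte in two
    return digits
```

For example, `score_digits(123456)` returns `[1, 2, 3, 4, 5, 6]`; the two divisions are the expensive part, and the table lookup is cheap.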

It made me wonder what kind of fancy math was used in NES games or could be used in an NES game. Every byte in a binary score counter affects every digit.
Some NES games such as RHDE store large values as binary and convert upon display. There are efficient methods for this, such as these by Omegamatrix, though it's not something you want to do several times every frame.
Some NES games such as Super Mario Bros. store large values with one byte per digit.
Some NES games store large values as BCD, where \$33 means thirty-three, not fifty-one, and implement BCD addition in software. This is easier on Super NES because decimal mode wasn't cut out of the S-CPU.
Some NES games such as Thwaite store large values as base 100, where \$0C \$22 means 1234 because the bytes mean twelve and thirty-four respectively, and convert each byte upon display. This is faster than converting an entire 16- or 24-bit number, as it can be done with a reasonably sized lookup table or even without a table in 80 cycles per byte.
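A small Python model of the base-100 scheme described above (the helper names are mine; the \$0C \$22 example is from the post):

```python
# Base-100 score storage, modeled in Python. Bytes are kept most
# significant first, e.g. [0x0C, 0x22] = twelve, thirty-four = 1234.

def base100_digits(score):
    """Split each base-100 byte into its two decimal digits for display."""
    digits = []
    for b in score:
        digits += [b // 10, b % 10]  # on a 6502: a small LUT or ~80 cycles of math
    return digits

def base100_add(score, amount):
    """Add a small binary amount to the score, carrying at 100 per byte."""
    for i in reversed(range(len(score))):
        amount, score[i] = divmod(score[i] + amount, 100)
```

Here `base100_digits([0x0C, 0x22])` gives `[1, 2, 3, 4]`. The win is that only the bytes that changed need re-splitting, instead of converting the whole 16- or 24-bit number every frame.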

If I were doing something with large floating point values, such as an idle game, I'd probably use base 100.
Of the 3 snippets on the first page, 2 will not work in an NES game; likewise, both of the snippets on the second page. Why? Because I sed so.

(Why for real? The sed instruction on a 2A03 does not do what those snippets expect it to do.)
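Since the 2A03 ignores the decimal flag, the BCD approach mentioned above has to do the decimal adjust in software. A rough Python model of one per-byte method (my sketch of the general technique, not any particular game's routine):

```python
# Software packed-BCD addition, one byte (two digits) at a time.
# Assumption: this models the general technique, not a specific game's code.

def bcd_add_byte(a, b, carry_in=0):
    """Add two packed-BCD bytes (0x33 means thirty-three); return (carry_out, sum)."""
    lo = (a & 0x0F) + (b & 0x0F) + carry_in
    hi = (a >> 4) + (b >> 4)
    if lo > 9:           # adjust the ones nibble past 9
        lo -= 10
        hi += 1
    carry_out = 0
    if hi > 9:           # adjust the tens nibble past 9
        hi -= 10
        carry_out = 1
    return carry_out, (hi << 4) | lo
```

For example, `bcd_add_byte(0x33, 0x29)` gives `(0, 0x62)`, i.e. 33 + 29 = 62. On a real 6502 or on the S-CPU, SED followed by ADC does this adjust in hardware.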
try http://codebase64.org/doku.php?id=base:6502_6510_maths

However, are you optimizing for speed or RAM? Not really sure the 65816 has anything to offer over a 6502..
There is also my method (which is used for 16-bit numbers). You need the first twenty patterns in the pattern table to be the numbers 0 to 9 twice, and you also need lookup tables. Then each digit (from right to left) is computed like this:
Code:
LDA lo_table,X
CMP #10
The ADC for each digit may then add one, due to the carry set by the previous digit's CMP.
zzo38 wrote:
Code:
LDA lo_table,X
CMP #10

What's in lo_table, hi_table, X and Y?
qalle wrote:
zzo38 wrote:
Code:
LDA lo_table,X
CMP #10

What's in lo_table, hi_table, X and Y?

In X is the low byte, Y is the high byte; lo_table and hi_table are tables of digits. Different tables are used for each digit position. For example, for the ones (rightmost) digit, lo_table repeats 0 to 9, while hi_table repeats 0, 6, 2, 8, 4 (since it represents the ones digit of the index multiplied by 256).

You will have to replace lo_table and hi_table with the correct tables for each digit. Note also the pattern tables must be set up as I specified, in order for this to work.
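Here is my attempt at a Python model of the method as described (a reconstruction from the post, so treat the details as assumptions; lo_table and hi_table are the post's names, the rest is mine):

```python
# Per-digit tables for a 16-bit value split as X (low byte) + 256*Y (high byte).
# Tiles 0-19 are assumed to show the glyphs 0-9 twice, so a per-digit sum of
# up to 19 can be written to the name table without any mod-10 step.

def make_tables(pos):
    lo = [(x // 10 ** pos) % 10 for x in range(256)]        # digit of X
    hi = [(y * 256 // 10 ** pos) % 10 for y in range(256)]  # digit of 256*Y
    return lo, hi

def tiles_for(value):
    """Return the five tile numbers (0-19) for a 16-bit value, ones digit first."""
    x, y = value & 0xFF, value >> 8
    tiles, carry = [], 0
    for pos in range(5):
        lo_table, hi_table = make_tables(pos)
        t = lo_table[x] + hi_table[y] + carry  # LDA lo_table,X / ADC hi_table,Y
        carry = 1 if t >= 10 else 0            # CMP #10 sets carry for the next digit
        tiles.append(t)
    return tiles
```

For the ones position this reproduces the tables given above: lo repeats 0 to 9, and hi repeats 0, 6, 2, 8, 4.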
Oziphantom wrote:
Not really sure the 65816 has anything to offer over a 6502..

True, but for this specific purpose, both the 65816 and an authentic 6502 have something to offer over the slightly cut-down 6502 in the 2A03: the ALU reacts to sed.
tepples wrote:
Oziphantom wrote:
Not really sure the 65816 has anything to offer over a 6502..

True, but for this specific purpose, both the 65816 and an authentic 6502 have something to offer over the slightly cut-down 6502 in the 2A03: the ALU reacts to sed.

I didn't comment on this earlier because I figured it was regarding the specific piece of code. However, my 65816 Forth runs two to three times as fast as my 65c02 Forth at a given clock speed, and the code is more compact. Further, when you're always dealing with 16-bit quantities, I find the '816 to be much easier to program than the '02 is. The '816 can also do things the '02 is either clumsy at or downright incapable of.