Why is it possible to collect only 255 rupees in "The Legend of Zelda" for the NES?

I mean, normally, I would know why this is the case: 0-255 = one byte.

But the rupees are displayed as a decimal number. So, didn't they store the value in a three-byte array, with one byte representing one decimal digit, like they did with the score in "Super Mario Bros."?

Does "Zelda" really have a binary-to-decimal output conversion function in the program code? Because if not, why doesn't the rupee count max out at 999 instead?

I don't know what it's doing internally, but 255 is usually a dead giveaway of using a byte to store the value, regardless of how it's being output on screen =P

As this was Nintendo's first game with save files, keeping the game's stored state simplified would have been advantageous.

The linear usage of RAM, starting from $0656, supports this idea.

The rupee count is stored in a single byte (range 0-255) at RAM location $066d. The code turns the binary value of that byte into a "3-tile sequence". The raw value is literal binary, not BCD (i.e. a value of $12 will show up as 18). Effectively it's something like this -- it's been a long, long time since I've looked at the code:

Tile/CHR offsets $00 through $09 are literally `0` through `9`

Tile/CHR offset $21 is an `X` (indicating "number of" or "count of"); it's the same tile that's used for "X" in game text

Tile/CHR offset $24 is a blank/empty tile

If the rupee count is between 0 and 9 (1 digit), it displays `X` followed by the 1-digit value, followed by an empty/blank tile. E.g. a rupee count of 3 would render as $210324.

If the rupee count is between 10 and 99 (2 digits), it displays `X` followed by the 2-digit value. E.g. a rupee count of 49 would render as $210409.

Otherwise, it prints the rupee count as a 3-digit number.

It might also avoid needing to draw tile $24 explicitly by simply filling that area of the nametable with $24 by default, then drawing only the tiles it needs. Same end result either way.
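Based on the description above, the display logic can be sketched in C. This is a sketch of the behavior as described in this thread, not code from any Zelda disassembly; `rupee_tiles` and the constant names are my own, and it assumes the three-digit case replaces the `X`, as the 3-tile examples suggest.

```c
/* Tile IDs from the post: $00-$09 are the digits, $21 is 'X', $24 is blank. */
enum { TILE_X = 0x21, TILE_BLANK = 0x24 };

void rupee_tiles(unsigned char count, unsigned char out[3])
{
    if (count < 10) {                 /* X, digit, blank */
        out[0] = TILE_X;
        out[1] = count;
        out[2] = TILE_BLANK;
    } else if (count < 100) {         /* X, tens, ones */
        out[0] = TILE_X;
        out[1] = count / 10;
        out[2] = count % 10;
    } else {                          /* hundreds, tens, ones */
        out[0] = count / 100;
        out[1] = (count / 10) % 10;
        out[2] = count % 10;
    }
}
```

A count of 3 gives tiles $21 $03 $24 and 49 gives $21 $04 $09, matching the examples above.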

And honestly, that's exactly how I'd do it -- only "splitting" the number into individual digits in the routine that needed to draw the proper on-screen tiles to reflect the value/number. The `X` stuff is a nice cosmetic touch that players don't get hung up on when playing; I never really noticed it until now, actually.

Footnote: bombs (RAM $0658) and keys (RAM $066e) are printed the same way (yup -- you can have up to 255 of those too! The limiting factor is mainly in the game mechanics: how/when you find maximum bomb increases, etc.). The one exception is the master key: after you pick that up, the game sets some flag (not sure of the RAM location) that essentially ignores the RAM $066e value and instead just prints `XA{blank}` (tiles $210a24).

Left-aligning numbers is indeed very elegant, and this precludes keeping working data in BCD in memory. However, other games commonly use a variant of BCD to store their data; for instance, Castlevania uses BCD entirely, despite the NES lacking a BCD mode. Other games simply store digits in the low nybble and waste the high nybble. After all, the NES has 2K of RAM; one byte more or less will rarely make the difference.

Bregalad wrote:

Left-aligning numbers is indeed very elegant, and this precludes keeping working data in BCD in memory.

How does digit-by-digit storage prohibit left alignment? Just do the equivalent of this before you copy the text into the output buffer:

**Code:**

```c
unsigned i = 0;  /* index of the first digit to draw */
while (i < NUM_DIGITS - 1 && digits[i] == 0) {
    ++i;
}
```

**Quote:**

Other games simply store digits in the low nybble and waste the high nybble. After all, the NES has 2K of RAM; one byte more or less will rarely make the difference.

Unless it's an RPG or strategy game that has a *lot* of stats that need to be tracked and displayed.

Among games I've programmed:

- *Concentration Room* and "ZapPing" in *Zap Ruder* use 1-byte binary values for the players' scores and convert them to decimal on display
- *Thwaite* stores the score as 3-byte "base 100", where each byte represents two digits from 0 to 99 ($00 to $63), and the ammo stocks are 1-byte binary, all converted to decimal on display
- *RHDE* uses 2-byte values from 0 to 65535 for house scores and money amounts, converted to decimal on display
- *Haunted: Halloween '85* stores the kill count as a 2-byte value from 0 to (theoretically) 65535, converted to decimal on display

Thanks for the replies. So, "Zelda" does convert the number for display. Interesting.

In my own game, there's only the score that needs to be displayed and a timer in certain situations.

Lives cannot go higher than 5. (Which is a design choice. I could increase them to 9 without altering the rest of the code.)

And energy is not shown as a number, but as individual units.

I save these values as one byte per digit since there's still plenty of room in RAM.

And each thing that gets you points can only have one non-zero digit. E.g. you can get 20 or 200 points for one thing, but not 250. Which makes the addition function quite simple.

But I agree that an RPG definitely profits from a conversion function. If you have dozens of stats that can go from 0 to 50000, it definitely makes a difference whether each one is a two-byte unsigned int or a five-byte array.

Converting small binary numbers to decimal is not hard; the simplest way to do it is repeated subtraction.

Let's say you have a 16-bit unsigned int (0-65535).

Count how many times you can subtract 10000, 1000, 100, 10, 1. That's your decimal number right there.
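In C, that repeated-subtraction scheme looks roughly like this (a sketch; the function and variable names are mine):

```c
/* Convert a 16-bit value to five decimal digits by counting how many
 * times each power of ten can be subtracted. */
void u16_to_digits(unsigned value, unsigned char digits[5])
{
    static const unsigned place[5] = { 10000, 1000, 100, 10, 1 };
    for (int i = 0; i < 5; ++i) {
        digits[i] = 0;
        while (value >= place[i]) {   /* subtract until it no longer fits */
            value -= place[i];
            ++digits[i];
        }
    }
}
```

On a 6502 each of those inner loops becomes a 16-bit compare-and-subtract, and the worst case adds up, which is what the next reply is getting at.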

The problem isn't that, the problem is doing it quickly.

How quickly does it need to be? You only need to convert values during those frames when you're updating the status bar. The routine I used in *Concentration Room*, *Thwaite*, and *Zap Ruder* converts 8 bits to 3 digits in 80 cycles, and the routine in *RHDE* and *HH85* converts 16 bits to 5 digits in about 670 cycles. Other, faster routines have been posted.

Yeah, usually you update like once every many frames =/ I imagine that most games actually just do some rather dumb binary-to-decimal conversion and move on.

Sik wrote:

The problem isn't that, the problem is doing it quickly.

What's the fast method for doing this on the NES?

I'm thinking a 256-entry LUT, which outputs 0 to 99. And for the last digit in the 100s place, simply check if the original byte is 200 or greater (populate a 2) or 100 or greater (populate a 1). Since the NES doesn't have a decimal mode like other 65x chips, you can't easily cascade two more bytes in this process, unfortunately.
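A C model of that LUT idea (the table is filled at startup here for illustration; on the NES it would be a precomputed 256-byte table in ROM, and the names are mine):

```c
/* bcd_lut[v] holds (v % 100) as packed BCD: tens in the high nibble,
 * ones in the low nibble. */
unsigned char bcd_lut[256];

void init_bcd_lut(void)
{
    for (int i = 0; i < 256; ++i) {
        int r = i % 100;
        bcd_lut[i] = (unsigned char)(((r / 10) << 4) | (r % 10));
    }
}

/* Returns the hundreds digit (0-2); *low gets the packed tens/ones. */
unsigned char byte_to_dec(unsigned char v, unsigned char *low)
{
    *low = bcd_lut[v];
    return v >= 200 ? 2 : v >= 100 ? 1 : 0;
}
```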

Here's what *Thwaite* and *Concentration Room* use. For 69, it produces $0000 = $06 and A = $09. For 246, it produces $0000 = $24 and A = $06.

**Code:**

```
; 8-bit binary to decimal converter
; copyright 2010 Damian Yerrick
; License: WTFPL http://www.wtfpl.net/

.macro bcd8bit_iter value
  .local skip
  cmp value
  bcc skip
  sbc value
skip:
  rol highDigits
.endmacro

;;
; Converts a binary number to two or three BCD digits
; in no more than 84 cycles.
; @param A the number to change
; @return A: low digit; $0000: upper digits as nibbles
.proc bcd8bit
highDigits = 0

  asl highDigits
  asl highDigits

  ; Each iteration takes 11 cycles if subtraction occurs or 10 if not.
  ; But if 80 is subtracted, 40 and 20 aren't, and if 200 is
  ; subtracted, 80 is not, and at least one of 40 and 20 is not.
  ; So this part takes up to 6*11-2 cycles.
  bcd8bit_iter #200
  bcd8bit_iter #100
  bcd8bit_iter #80
  bcd8bit_iter #40
  bcd8bit_iter #20
  bcd8bit_iter #10
  rts
.endproc
```

And it uses the "base 100" trick because of this lack of cascading.

**Code:**

```
;;
; Adds between 1 and 255 points to the score.
; X, Y, and memory (apart from the score) are unchanged.
.proc addScore
  clc
  adc score1s
  bcc notOver256
  ; Sum wrapped past 256: A = sum - 256. Bump the hundreds by 2 and
  ; add 56 (adc #55 with carry set), so A = sum - 200 overall.
  inc score100s
  inc score100s
  adc #55
notOver256:
  cmp #100
  bcc notOver100
  sbc #100
  inc score100s
  bcs notOver256   ; carry is still set after sbc; loop to check again
notOver100:
  sta score1s
  lda bgDirty
  ora #BG_DIRTY_STATUS
  sta bgDirty
  rts
.endproc
```
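For reference, here is the same base-100 add as a C model (my translation of the routine above, not the actual Thwaite source; the wrap-past-256 fixup becomes an ordinary loop):

```c
/* score1s holds the low two digits (0-99); score100s counts hundreds. */
unsigned char score1s, score100s;

void add_score(unsigned points)   /* points: 1 to 255 */
{
    unsigned sum = score1s + points;   /* at most 99 + 255 = 354 */
    while (sum >= 100) {               /* carry into the hundreds */
        sum -= 100;
        ++score100s;
    }
    score1s = (unsigned char)sum;
}
```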

tomaitheous wrote:

I'm thinking a 256-entry LUT, which outputs 0 to 99.

You only need 100 entries for that ;P

But yeah, that's fast and feasible for homebrew. Back in the day it wasn't that feasible with tiny PRG-ROMs, though, where every byte mattered; hence the slower methods. (Well, that and not every programmer being clever enough, or having enough time, to come up with a good method.)

Sik wrote:

You only need 100 entries for that ;P

Yes, just handle the hundreds digit first and whatever's left is guaranteed to be 99 or less.

Sik wrote:

tomaitheous wrote:

I'm thinking a 256-entry LUT, which outputs 0 to 99.

You only need 100 entries for that ;P

Well, you did mention speed...

Up the 256 to 512 bytes...

**Code:**

```
ByteBin2Dec:
    lda TableDec,x
    ldy SwapLowNybble2BCD,x
    ldx TableDec,y
    rts
```

Input is byte in X. Returns BCD in two bytes via X:A.

Edit: I guess it doesn't make any sense to have a whole 'nother 256-entry LUT for the high-nybble shift and extraction when it can just convert it directly. So just TableDec and TableDecUpper. I also thought you could simply cascade afterwards for a larger binary conversion using this method, but I just realized that's not going to work. Meh...

tomaitheous wrote:

Well, you did mention speed...

Checking for 100 or 200 is nothing compared to doing a division by 10 to split the tens from the units.

If you understand "checking for 200", then you can understand my routine as applying the same concept to 200, 100, 80, 40, 20, and 10.
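A C model of that routine may make the trick clearer (my translation, not the original code): each conditional subtraction rolls one bit into `highDigits`, and those six bits land exactly where packed BCD wants them.

```c
/* Conditionally subtract 200, 100, 80, 40, 20, 10 in turn, shifting a
 * 1 into hd for each value that fit. The high nibble of hd ends up
 * equal to the hundreds digit and the low nibble to the tens digit. */
unsigned char bcd8bit(unsigned char a, unsigned char *high)
{
    static const unsigned char sub[6] = { 200, 100, 80, 40, 20, 10 };
    unsigned char hd = 0;
    for (int i = 0; i < 6; ++i) {
        hd <<= 1;                 /* rol highDigits */
        if (a >= sub[i]) {        /* cmp / bcc skip / sbc */
            a -= sub[i];
            hd |= 1;
        }
    }
    *high = hd;   /* packed BCD: hundreds high nibble, tens low nibble */
    return a;     /* ones digit */
}
```

For 246 this returns 6 with *high = $24, matching the example earlier in the thread.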

tomaitheous wrote:

Sik wrote:

The problem isn't that, the problem is doing it quickly.

What's the fast method for doing this on the NES?

I'm thinking a 256-entry LUT, which outputs 0 to 99. And for the last digit in the 100s place, simply check if the original byte is 200 or greater (populate a 2) or 100 or greater (populate a 1). Since the NES doesn't have a decimal mode like other 65x chips, you can't easily cascade two more bytes in this process, unfortunately.

Here are some quick hex-to-decimal methods I have written.

viewtopic.php?p=130363#p130363

And a quick summary of the routines' byte usage and cycle counts (the cycle counts include the JSR/RTS):

**Code:**

```
;slow routine - 174 bytes, 183 bytes with HexToDec255 and HexToDec999
;HexToDec99    ; 37 cycles
;HexToDec255   ; 52-57 cycles
;HexToDec999   ; 72-77 cycles
;HexToDec65535 ; 178-186 cycles

;Fast routine - 234 bytes, 243 bytes with HexToDec255 and HexToDec999
;HexToDec99    ; 37 cycles
;HexToDec255   ; 52-57 cycles
;HexToDec999   ; 72-77 cycles
;HexToDec65535 ; 157-162 cycles

;-------------------------------------------------------------------------------
;HexToDec99
; start in A
; end with A = 10's, decOnes
;HexToDec255
; start in A
; end with Y = 100's, A = 10's, decOnes
;HexToDec999
; start with A = high byte, X = low byte
; end with Y = 100's, A = 10's, decOnes
; requires 1 extra temp register on top of decOnes, could combine
; these two if HexToDec65535 was eliminated...
;HexToDec65535
; start with A = high byte, X = low byte
; end with decTenThousand, decThousand, Y = 100's, A = 10's, decOnes
; requires 2 extra temp registers on top of decTenThousand, decThousand, decOnes
```

Sik wrote:

The problem isn't that, the problem is doing it quickly.

I actually wanted to do this once with 16-bit vars, and here is the code I came up with, if that's what you're asking for: viewtopic.php?f=2&t=13816

Off topic, but the original post reminded me of a question I had about Baskin-Robbins: why 31 flavors instead of 32? But it turns out to have nothing to do with base-2 math. Rather, it's a flavor for every day of the month.

Edit: Also this.