This page is a mirror of Tepples' nesdev forum mirror (URL TBD).
Last updated on Oct-18-2019

Measuring the time that a function takes

Measuring the time that a function takes
by on (#158185)
Is there a way to measure the time that a certain function takes to process? I'm thinking of something that is immediately visible on the screen, so I don't have to stop the game and check some stuff in the binary code of the emulator's PPU or something.

For example, is there a way to read the pixel position that the cathode ray is at in a given moment? This way I could read that value, store it in a variable and then use a few debug sprites to display it on the screen.

Or some other way to manipulate the screen output so that I can see what the cathode ray is drawing at any given moment.
Re: Measuring the time that a function takes
by on (#158186)
Writes to $2001 (PPUMASK) take effect immediately. Turning on greyscale + a colour emphasis can be a great way to mark on the screen when a particular thing happened. This is what I do in my own project.

(Greyscale in conjunction with emphasis is important because it replaces black with grey, so you don't lose the ability to see the mark against black.)
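As a sketch of the idea in C (the PPUMASK bit values are documented NES behaviour; `MASK_NORMAL` and the function names are made up for illustration, and on real hardware the computed value would simply be stored to $2001):

```c
#include <assert.h>

/* PPUMASK ($2001) bits: bit 0 = greyscale,
   bits 5-7 = colour emphasis (red/green/blue on NTSC). */
#define MASK_GREYSCALE 0x01u
#define MASK_EMPH_RED  0x20u

/* Hypothetical "normal" mask for this sketch: show background and
   sprites, including the leftmost 8-pixel columns. */
#define MASK_NORMAL    0x1Eu

/* On hardware these values would go straight to the register,
   e.g. *(volatile unsigned char *)0x2001 = v; the write takes
   effect immediately, even mid-scanline. */
unsigned char mask_profile_on(void)  { return MASK_NORMAL | MASK_GREYSCALE | MASK_EMPH_RED; }
unsigned char mask_profile_off(void) { return MASK_NORMAL; }
```

Write the "on" value just before the code being timed and the "off" value just after; the tinted grey band on screen shows how much of the frame that code consumed.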
Re: Measuring the time that a function takes
by on (#158187)
Alternatively, if you're using an emulator: in FCEUX you can create Lua scripts that measure timing information and overlay it on the screen for you. I often use this when debugging to visualize hitboxes, etc.
Re: Measuring the time that a function takes
by on (#158188)
Thanks. Writing to PPUMASK seems to be the best way for me.
Writing a Lua script would mean that I'd first have to learn how that works.
Re: Measuring the time that a function takes
by on (#158189)
Thefox's NintendulatorDX variant adds a set of registers at $402X/$403X for timing:
NintendulatorDX readme wrote:
Cycle Counting Timers
---------------------
16 CPU cycle counting timers are available for timing different parts of your code. At the moment the timers can only be controlled by register writes: write to register $402x when you want the timing to start, and to $403x when you want it to end, where "x" is the timer number.

E.g.
Code:
  ; Start timer #5. The written value doesn't matter.
  sta $4025
  ; Do whatever.
  nop
  lda #123
  ; Stop timer #5. After this the Debugger window will display the
  ; number of CPU cycles taken by the above code block (4 cycles).
  sta $4035

The cycles taken by the STA instructions themselves do not count toward the reported number of cycles!

By defining a specially named symbol it's possible to give the timer a name. The symbol should be called "__timerN_title", and it should point to a zero-terminated string containing the name of the timer (replace N with the timer number). This name is displayed in the Debugger window.

E.g.

Code:
  .code
    __timer5_title:
      .asciiz "nmi routine"

The following macros may be useful:
Code:
  .macro startTimer timer
    sta $4020 + timer
  .endmacro

  .macro stopTimer timer
    sta $4030 + timer
  .endmacro

  .macro timerTitle timer, title
    .ident( .sprintf( "__timer%d_title", timer ) ):
      .asciiz title
  .endmacro

Known problems:
* The address of the registers ($402x/$403x) overlaps the FDS register area.

See also:
* [Lua] NDX.getCPUCycles()
Re: Measuring the time that a function takes
by on (#158190)
I would simply set grayscale + emphasis in the beginning of the function, and return to normal in the end. This should give a very clear visual indication of how much of the frame's time is being spent on the function. You could even use different emphasis configurations if you wanted to time different functions.
Re: Measuring the time that a function takes
by on (#158196)
More methods to measure execution time:
  1. Set up a mapper's interval timer for some estimate of your subroutine's duration, run the subroutine, and then count cycles until the IRQ triggers.
  2. Do the above using APU DMC. Play a 1-byte sample at maximum rate through the DMC, wait for the IRQ, play the sample at a lower rate, call the subroutine, and then count cycles until a second IRQ triggers.
  3. Make your program buildable for both the NES and Super NES. The latter has a readable H/V counter.
  4. Set up your subroutine's preconditions, then run it in a 6502 simulator that counts cycles.
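Method 4 in miniature: a toy cycle counter that just sums the documented base cost of each instruction in a straight-line trace (the table and mnemonic names are made up for this sketch; a real simulator also models branches and page-crossing penalties):

```c
#include <assert.h>
#include <string.h>

struct op { const char *mnemonic; unsigned cycles; };

/* Base cycle costs from the 6502 data sheet. */
static const struct op table[] = {
    { "nop",     2 },  /* implied */
    { "lda_imm", 2 },  /* immediate */
    { "sta_zp",  3 },  /* zero page */
    { "sta_abs", 4 },  /* absolute */
};

/* Sum the cost of every instruction in the trace. */
unsigned count_cycles(const char **trace, unsigned n)
{
    unsigned total = 0, i, j;
    for (i = 0; i < n; ++i)
        for (j = 0; j < sizeof table / sizeof table[0]; ++j)
            if (strcmp(trace[i], table[j].mnemonic) == 0)
                total += table[j].cycles;
    return total;
}
```

Run on the NintendulatorDX readme's example body (nop, then lda #123), this reports 4 cycles, matching the readme.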

But for me, the easiest way is to open the debugger of FCEUX for Windows, set a breakpoint at the start of the subroutine, run the program until the subroutine starts, click "Step Out", and then check the cycle counter. It'll be of the form "CPU cycles: xxxxxxxxx (+yyyy)"; the yyyy is how many cycles the subroutine takes. This breaks down, however, if the subroutine runs once a frame and has an execution time that varies based on the player's actions in the last few frames, because it's so hard to send input to a program where a breakpoint triggers every frame. Is this the problem you're running into?
Re: Measuring the time that a function takes
by on (#158201)
Isn't this all huge overkill? To me, writing to PPUMASK seems to be the most straightforward and easiest thing to implement. And it gives me a pretty accurate output that's easy to see at first glance.

About the FCEUX debugger: That's what I wanted to avoid. Because I want to see how much time everything takes during regular gameplay.
So, yes, actions that take a different amount of time every frame might be an issue. At the moment, I don't have a specific problem; I just want a general way to measure the time. Especially since I have to split the game logic into three parts because of the status bar and the parallax split. So, I want to see what I can push into the smaller parts.
Re: Measuring the time that a function takes
by on (#158202)
It sounds to me like the best thing for what you need is to use the cycle counting timers with NintendulatorDX.

This is what I do when I need to gauge the length in cycles of a routine, the entire program, the vBlank handler, or essentially any part of the program that can be marked with a start and stop position.

With this you get an average of what the total has been while your program has been running, which sounds like what you want.
Re: Measuring the time that a function takes
by on (#158203)
Often I just try to count the cycles manually. This won't work if it is too complicated though.
Re: Measuring the time that a function takes
by on (#158204)
DRW codes in C, so counting cycles manually isn't an option.
Re: Measuring the time that a function takes
by on (#158205)
What's he coding in C for if he's concerned about cycle count? :lol:
Re: Measuring the time that a function takes
by on (#158214)
Espozo wrote:
What's he coding in C for if he's concerned about cycle count? :lol:

Performance always matters when your CPU time is limited, and this may surprise you, but it actually matters even more when using C.

Using C is like trying to work with a CPU with 1/10th the power. You probably have a LOT more need to monitor performance to check on your budget than when doing things in assembly. :P

It's also very easy to write code that looks simple in C, but accidentally generates horribly slow code. This is yet another reason why you need to check frequently, because otherwise it's hard to know when you wrote something that destroyed your timing budget.
Re: Measuring the time that a function takes
by on (#158222)
Could Espozo have meant that DRW shouldn't be using C if he's concerned about the cycle count?
Re: Measuring the time that a function takes
by on (#158224)
Of course that's what Espozo meant. Espozo's also made it clear in the past that he doesn't understand the point of using any higher-level language, so it's not clear that his point is useful.
Re: Measuring the time that a function takes
by on (#158226)
I was trying to answer the question as if it wasn't hostile, for its own benefit. I don't really care if he was sincerely asking or not.
Re: Measuring the time that a function takes
by on (#158227)
tokumaru wrote:
Could Espozo have meant that DRW shouldn't be using C if he's concerned about the cycle count?

I think that's what he meant. (jokingly)

And I believe rainwarrior's point is that C isn't as efficient so he has to be more conscious of his cycle counts.

The ultimate question is whether or not the specific game design sought will function in C. If it's not demanding enough to require assembly optimization, and produces the same end product, and it's easier/faster to make the game in C, then it's a win-win.

Perhaps, if he wants to avoid as much assembly coding as possible, checking cycle counts for routines would give him an idea of where he needs to optimize in ASM.
Re: Measuring the time that a function takes
by on (#158241)
darryl.revok wrote:
I think that's what he meant. (jokingly)

That's what I meant.

lidnariq wrote:
Espozo's also made it clear in the past that he doesn't understand the point of using any higher-level language, so it's not clear that his point is useful.

How does that make my point any less useful? Is it that I've never programmed in a higher-level language, so I don't know how slow it really is compared to assembly? I've said this before and I'll say it again: if a dumb kid like me can do a somewhat decent job programming in assembly, I don't see why people find it that difficult. That is, unless it isn't that assembly is difficult, it's just that C is about 50x easier, which I doubt. (I imagine the core concept is the same.) I just hate all the naysayers who say it would be "virtually impossible" to make a modern game in assembly. It would be a hell of a challenge, but impossible? Granted, most of the people I know who have made claims like this have never even programmed in assembly or know how to, so...

darryl.revok wrote:
Perhaps, if he wants to avoid as much assembly coding as possible, checking cycle counts for routines would give him an idea of where he needs to optimize in ASM.

I think I remember hearing that trying to fix converted C code for the NES is just about as difficult as writing in assembly from scratch.

rainwarrior wrote:
I was trying to answer the question as if it wasn't hostile, for its own benefit.

I can handle myself. :wink: If I were that concerned about what people thought of stupid comments like that, I wouldn't have posted them in the first place. :lol:
Re: Measuring the time that a function takes
by on (#158247)
Espozo wrote:
How does that make my point any less useful? Is it that I've never programmed in a higher-level language, so I don't know how slow it really is compared to assembly? I've said this before and I'll say it again: if a dumb kid like me can do a somewhat decent job programming in assembly, I don't see why people find it that difficult. That is, unless it isn't that assembly is difficult, it's just that C is about 50x easier, which I doubt. (I imagine the core concept is the same.) I just hate all the naysayers who say it would be "virtually impossible" to make a modern game in assembly. It would be a hell of a challenge, but impossible? Granted, most of the people I know who have made claims like this have never even programmed in assembly or know how to, so...


Arguing about what is possible is pointless. It's possible to write a modern game with a hex editor if you really want to. The question is why you'd want to (or, in commercial development terms: how much it would cost). On the NES the reasons for using assembly are very clear (you need more performance and/or smaller code than the available compilers can provide). On a modern platform there would really be no advantage to an all-assembly approach. There is often an advantage to writing some parts of your program in assembly, and lots of modern games do.

On the NES, the largest project I've written in C was a musical project that took me about one day to write. I would estimate that the same task would have taken me at least a week in assembly.

A very simple example of why C is quicker to both read and write:
Code:
// C
tx = tx + vx;
if (tx < -500) tx = -500;

; 6502 assembly
lda tx+0
clc
adc vx+0
sta tx+0
lda tx+1
adc vx+1
sta tx+1
lda tx+0
cmp #<(-500)
lda tx+1
sbc #>(-500)
bvc :+
eor #$80
:
bpl :+
lda #<(-500)
sta tx+0
lda #>(-500)
sta tx+1
:

Which of these two pieces of code do you think took more time to write? Which one is more likely to have an error in it? Which one would be easier to make a minor change to? If there was an error in one of these, which one would be easier to spot and fix?

Consider also that the scope of a whole program is thousands or tens of thousands of times the size of this example. And this is a very simple case; a lot of the tasks you need to do are much more complex.
Re: Measuring the time that a function takes
by on (#158254)
rainwarrior wrote:
A very simple example of why C is quicker to both read and write:

Most of the problem in this comes from the fact that you're using 16-bit numbers, but it still wasn't the hardest thing to figure out. 90% of my problems come from trying to think of a solution to something, not from thinking of how to write it out, and C won't help there.

rainwarrior wrote:
If there was an error in one of these, which ones would be easier to spot and fix?

I'd think it would be easier to fix the one where you knew exactly what you were doing, because you made it from scratch. :/ I just always see these weird glitches in games and wonder how a human could have created any code that would cause some sort of weird side effect like that. I just don't like to try to give the computer too much to do with my code, because of... The WLA incident... (shudders)

Just saying though, if the code truly does convert that well, then what's with all this talk about how C generates inefficient code?
Re: Measuring the time that a function takes
by on (#158255)
@DRW, Re:OP

I like to (in assembly) write to an unused byte of RAM just before and just after the action in question, and then run it in FCEUX with the debugger set to break on reads of that RAM address. It tells you how many cycles have passed between breaks.

In assembly:
Code:
inc $ff


In C:
Code:
++(*(volatile unsigned char *)0xff);  /* volatile so the access isn't optimized away */


The debugger also tells you what scanline and pixel you are currently on.
Re: Measuring the time that a function takes
by on (#158257)
Quote:
C generates horribly slow code


Probably obvious to everyone here, since we talk about this a lot... but if you follow all the "optimization" C tips listed on Shiru's website*, the C code is only slightly less efficient (not counting the human inefficiency of the programmer having to type extra stuff to generate that more efficient code :wink: ).

* https://shiru.untergrund.net/articles/p ... s_in_c.htm
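For instance, two habits along those lines (my own illustration, not copied from the article; identifiers are made up): keep hot variables as 8-bit unsigned globals rather than stack parameters, and count loops down to zero so the 6502's implicit compare-against-zero does the work:

```c
#include <assert.h>

static unsigned char src[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
static unsigned char total;  /* globals: cc65's software parameter
                                stack is expensive, so avoid it */
static unsigned char i;

void sum_down(void)
{
    total = 0;
    i = 8;
    while (i) {     /* loop test is just "is i nonzero?" */
        --i;
        total += src[i];
    }
}
```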
Re: Measuring the time that a function takes
by on (#158262)
Espozo wrote:
rainwarrior wrote:
A very simple example of why C is quicker to both read and write:

Most of the problem in this comes from the fact that you're using 16-bit numbers, but it still wasn't the hardest thing to figure out. 90% of my problems come from trying to think of a solution to something, not from thinking of how to write it out, and C won't help there.

Try to write (to completion) a couple of moderately sized programs, and you're likely to change your opinion. High-level programming languages didn't appear by coincidence.
Re: Measuring the time that a function takes
by on (#158265)
Ehh, maybe I would, but I just always feel like I write code in about the same amount of time it takes me to come up with complex solutions to problems that I am coding as I am thinking of them. (that's a mouthful) It's the same way in how I don't see why people freak out so bad about being able to type fast, because when I'm writing an essay or something, I take just as much time to think of words as I do to write them. Now, on the other hand, if I'm just copying writing down (and I'm not copying and pasting), it would be faster if I could just type faster, but that usually never happens. I just don't feel that the actual coding aspect of coding has ever really been a problem outside of tedious stuff similar to that. It's mostly been about like how can I dynamically allocate sprites in vram, using as little tile space, bandwidth, and processing as possible, and I don't think that C will automatically help me there.

Of course, I've never even coded in C, so I wouldn't know.
Re: Measuring the time that a function takes
by on (#158267)
Espozo wrote:
I don't see why people freak out so bad about being able to type fast, because when I'm writing an essay or something, I take just as much time to think of words as I do to write them.


Not so much for composition, but for data entry and dictation work. Back when I was in school we were still using computers to copy text from written paper! :shock:

Secretarial work requires a minimum of 70 WPM, right?

I was hunting and pecking until I actually learned to type in high school, and I used computers a lot, so no telling how much time I wasted. I feel like kids now probably learn to do this in elementary school.

Quote:
I just don't feel that the actual coding aspect of coding has ever really been a problem outside of tedious stuff similar to that.


I'm sure there are better examples for the SNES, but take the NES for example. Doing multiplication or division would take much, much longer to think about, to type, and to implement in assembly versus C. Coding in C may produce a less efficient multiplication routine than you could write in assembly, but if you don't need anything special, you can type out a multiplication in a single line of code.
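To make that concrete, here's the kind of shift-and-add multiply an assembly programmer on a CPU without a multiply instruction ends up hand-rolling, written out in C; in everyday C the same operation is just `a * b`:

```c
#include <assert.h>

/* 8x8 -> 16-bit shift-and-add multiply: the routine you'd
   otherwise write (and debug) by hand in 6502 assembly. */
unsigned mul8(unsigned char a, unsigned char b)
{
    unsigned result = 0;
    unsigned m = a;          /* multiplicand, shifted left each pass */
    while (b) {
        if (b & 1)           /* add for each set bit of the multiplier */
            result += m;
        m <<= 1;
        b >>= 1;
    }
    return result;
}
```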

I'm saying this, but I also plan to develop my game entirely in assembly. However, I would like to learn C someday if for no reason other than building game development tools.
Re: Measuring the time that a function takes
by on (#158268)
darryl.revok wrote:
Secretarial work requires a minimum of 70 WPM, right?

minimum of 70? I average around 45... :lol: Some kid at my school actually typed a paragraph at over 120 WPM before. :shock: (I saw it.)

darryl.revok wrote:
I feel like kids now probably learn to do this in elementary school.

Nope, still high school, or at least it was for me. I knew how to use the bulk of Microsoft Office when I was about 6, though. (I went to a different elementary school than high school)

darryl.revok wrote:
I would like to learn C someday if for no reason other than building game development tools.

I just imagine actually writing something to conform with Windows would be a pain. (I assume that there's already been C stuff made for this before.)
Re: Measuring the time that a function takes
by on (#158273)
Typing speed is hardly the issue. More code requires more reading, more thinking.

This is not really that much of an issue when you're first writing a new piece of code. Usually at that moment you know what you want to do and how you're trying to do it. It's a huge issue when you need to go back and make a change somewhere in the middle, or when you made a mistake somewhere and you're not sure where, or you need to remember how 10 other pieces of code in other parts of the program have consequences entangled with how it works.

At this point, you need to remember exactly what that code is supposed to do, how it does it, and figure out what it is actually doing, and whether that's different than what you expect. All of these things are difficult to evaluate, and it's why skilled programmers are generally in high demand. If you wrote the code last week, last month, five years ago, or someone else wrote it entirely, you've got your work cut out for you.

The example I gave is just a small scale. To understand the real problem you have to imagine what a whole program looks like. Evaluating and understanding 1000 lines of C is much easier than doing the same with 10000 lines of assembly. This is just a plain fact.

You can't just look at a small example and say "well, I understand all those bits, that's easy". Of course it's easy to understand a small example. That's why it's a small example. That's precisely why I kept the example small. I wanted you to understand the code. The point wasn't whether you understood what a 16 bit comparison looked like. The point was YOU HAD TO SPEND SLIGHTLY MORE TIME READING IT TO KNOW THIS.
Re: Measuring the time that a function takes
by on (#158275)
Espozo wrote:
Some kid at my school actually typed a paragraph at over 120 WPM before. :shock: (I saw it.)

Some guy near my house actually drove on the road at over 60 MPH before. :shock: (I saw it.)

I would like to point out that C's inefficiency, as perceived here, is strongly tied to the CPU in question, the 6502. It's been discussed before why it's a poor target choice for a C compiler. For many later CPUs with registers more appropriately sized for holding pointers that can cover the address space, the gap is narrowed strongly, and much more so when good optimization comes into play, which is another thing existing 6502 C compilers will have trouble doing effectively.

A big, nearly unavoidable disadvantage of writing anything in assembly is that it is locked to the instruction set for which it's written. You simply cannot take a program written for the Z80 and quickly bring it to the 6502 without substantial reworking, and that's an easy example. What if you had to port from an IBM System/360 to a PDP-8? FORTRAN, COBOL, C, and all their friends before and after were developed partially to solve that problem. That doesn't mean assembly is without use today, but you'll find it more in (some) drivers on a modern computer, (some) embedded platforms, and (some) inner loops in game engines. With multiple decades of research in optimization on modern platforms, and innovations like LLVM, C is used for many of these things and in many cases will beat an assembly implementation of the same work.

I have a feeling this has strayed a bit off topic.
Re: Measuring the time that a function takes
by on (#158296)
dougeff wrote:
I like to (in assembly) write to an unused byte of RAM just before and just after the action in question, and then run it in FCEUX with the debugger set to break on reads of that RAM address. It tells you how many cycles have passed between breaks.

So if the subroutine that you're trying to measure is run once or more per frame, how do you feed controller input to the emulated program between breaks?
Re: Measuring the time that a function takes
by on (#158301)
tepples wrote:
dougeff wrote:
I like to (in assembly) write to an unused byte of RAM just before and just after the action in question, and then run it in FCEUX with the debugger set to break on reads of that RAM address. It tells you how many cycles have passed between breaks.

So if the subroutine that you're trying to measure is run once or more per frame, how do you feed controller input to the emulated program between breaks?

Perhaps in this case subroutines are run in a test case, and profiled outside of a gameplay context. Some scripting with an emulator and this specific memory write could result in a usable unit test suite.
Re: Measuring the time that a function takes
by on (#158302)
tepples wrote:
dougeff wrote:
I like to (in assembly) write to an unused byte of RAM just before and just after the action in question, and then run it in FCEUX with the debugger set to break on reads of that RAM address. It tells you how many cycles have passed between breaks.

So if the subroutine that you're trying to measure is run once or more per frame, how do you feed controller input to the emulated program between breaks?

There are actually two good ways I've used:
1. Use a gamepad for input and just hold it during the break.
2. Use FCEUX's TAS feature to play input for you. (Takes a bit of learning.)
Re: Measuring the time that a function takes
by on (#158306)
Quote:
So if the subroutine that you're trying to measure is run once or more per frame, how do you feed controller input to the emulated program between breaks?


So far I've mostly used this to calculate how much Vblank time I have left, or how many cycles I have left before and after a Sprite 0 hit. You can set auto-button presses in FCEUX if you need, but it's tricky.
Re: Measuring the time that a function takes
by on (#158322)
mikejmoffitt wrote:
Espozo wrote:
Some kid at my school actually typed a paragraph at over 120 WPM before. :shock: (I saw it.)

Some guy near my house actually drove on the road at over 60 MPH before. :shock: (I saw it.)

What, is typing at that speed not impressive? :oops:

rainwarrior wrote:
If you wrote the code last week, last month, five years ago, or someone else wrote it entirely, you've got your work cut out for you.

I guess that's why I write crazy long informative labels (hence "DMAPaletteRequestCounter"), although then it comes down to typing speed, so I guess that isn't totally irrelevant. :lol: I won't argue that C is faster to write, though; I'm just not sure if the speed of writing it outweighs the slowness of the code it generates, especially here. I don't know about you guys, but I just like being able to visualize exactly what my code makes the machine do, and to know that what the machine is doing is all because of what I did. I know that's irrelevant, but it seems that not a lot of other people share this feeling.
Re: Measuring the time that a function takes
by on (#158357)
Espozo wrote:
I don't know about you guys, but I just like being able to visualize exactly what the code makes the machine do and that what the machine is doing is all because of what I did, which I know is irrelevant, but it seems that not a lot of other people share this feeling.

It depends.

When I do low level stuff like writing to the PPU, I want to be able to control everything as well, that's why I always use Assembly in these cases.

But when I do mere mathematical stuff:
a = 3 * b + c / 2 - 1
or logical things:
if (x < 0 && e == 1) AddOpponent();
I don't really care about the underlying machine instructions.
As long as I don't suspect the compiler of having a bug and doing an incorrect calculation and as long as these instructions aren't a bottleneck in my code, why should I care how exactly the calculation is done?

Sure, on the NES it's a special issue since you really have limited CPU time, but I would never write anything in Assembly when I program a game on a PC.
Re: Measuring the time that a function takes
by on (#158358)
I kind of mean that I have the satisfaction of knowing that what appears on the screen is completely my doing, not something a computer wrote based on general commands I gave it. I imagine many people here are here for the sake of actually making a game, and although I'm here for that too, I'm also here to have fun coding, and I feel like coding in a high-level language would take some of the fun away. Sure, it's good if you go to an office and have to write code all day because it's your job and you just want a paycheck, but it isn't something I'd want to do for fun. I'm also attempting to prove that something more technically impressive/better looking can be done on the SNES than what has been achieved (if that's possible. It's recently been "discovered" that two of what are considered to be the most technically impressive games on the system only use SlowROM, so I'd say yes, and there's almost always room for improvement), because I feel that it's a system that has been shat on unfairly. As for the M92, that's because I feel like it would be interesting to look at another processor architecture, and there's always the novelty of making an arcade game, especially on hardware Irem used.
Re: Measuring the time that a function takes
by on (#158360)
I understand the appeal of having done it all yourself, and having an in-depth understanding of the platform. I think everybody should work with some assembly at least on one project to get a stronger appreciation for how the computer works. However, with that sort of knowledge comes the power to more responsibly use a higher level language and understand the consequences of your code's design. For embedded platform development (of which all of these old consoles and arcade boards are examples), hardware-specific drivers can benefit from being written in assembly, while game logic and less hardware-specific things can afford to be written in something higher level. If you follow a design like that, you might find portability between two platforms (like, gosh, the SNES and Genesis!) with a simple driver rewrite and some design changes. Obviously it gets more complicated than that, but it means not rewriting all of your game logic.
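One way to picture that split (the interface and all names here are hypothetical, not taken from any of the games mentioned): the portable logic calls through a tiny driver interface, and each platform supplies its own implementation.

```c
#include <assert.h>

/* Hypothetical hardware interface the portable logic is
   written against; each platform provides its own driver. */
struct video_driver {
    void (*set_scroll)(unsigned char x, unsigned char y);
};

static unsigned char cam_x, cam_y;

/* Portable game logic: no hardware registers in sight. */
void update_camera(const struct video_driver *vd, unsigned char player_x)
{
    cam_x = (unsigned char)(player_x - 128u);
    vd->set_scroll(cam_x, cam_y);
}

/* Stub driver for testing; a real NES driver would buffer a
   PPUSCROLL write, a Genesis driver would poke the VDP, etc. */
static unsigned char last_x, last_y;
static void stub_set_scroll(unsigned char x, unsigned char y)
{
    last_x = x;
    last_y = y;
}
static const struct video_driver stub = { stub_set_scroll };
```

Porting then means rewriting the driver, not the game logic.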

I feel like your response about languages like C being good for office work is a little premature. It is great that you are enjoying learning assembly and system architecture at a young age, but coming to a conclusion like that comes off as a little over-reaching to me.

If you want a reference for some faith that a '90s arcade platform can run a game written in C, just about all of the Capcom fighters on CPS2 have a lot of their logic written in C (this is a mix of word-of-mouth, the fact that nearly perfectly-reproduced gameplay logic appears on multiple platforms (Alpha 3 on CPS2 --> Saturn --> Dreamcast --> PS2), and a hint of good ol' conjecture).

M2 wrote Fantasy Zone II DX for Sega System 16C in C, save for some graphics code that was done in 68000 assembly. The game runs great, and nothing obviously suffered as a result. Plus, the game received a perfect port to Nintendo 3DS. C came through as a portable language!

As poorly as Sonic Spinball runs on Sega Genesis, we must remember that the game was made in four months, and C compilers in the '90s were much harder to work with and had substantially less advanced optimization. Although the game was molasses on the Genesis, the use of C let the Game Boy Advance port used in Sega Smash Pack be a port, not a rewrite.
Re: Measuring the time that a function takes
by on (#158362)
mikejmoffitt wrote:
If you follow a design like that, you might find portability between two platforms (like, gosh, the SNES and Genesis!)

I feel like on these systems, which have their own strengths and advantages, it would be best to make a similar game but tailor it to the strengths of each system's hardware. I guess you could port the bulk of the code over, though. I just see a lot of games on both systems being downgraded to be exactly the same on both, like ports across the Genesis and the SNES that only use about 64 colors on the SNES port, or that have 256-pixel-wide screens in mind for the Genesis, or that use part of VRAM for sprites (you can try to dynamically switch it out so that it seems you have as much space as you could ever need, but that's a pain in the butt, as I can tell you...) when more could be used, or whatever.

mikejmoffitt wrote:
I feel like your response about languages like C being good for office work is a little premature. It is great that you are enjoying learning assembly and system architecture at a young age, but coming to a conclusion like that comes off as a little over-reaching to me.

Yeah, sorry. I guess I meant to say that it's for getting whatever you want to get done, as if I were 100% focused on just making a game and could care less about actually programming it for the sake of programing it, then you'd go with C over assembly. At least, that's how it seems to me. I don't know if that's how other people feel.

mikejmoffitt wrote:
If you want a reference for some faith that a '90s arcade platform can run a game written in C, just about all of the Capcom fighters on CPS2 have a lot of their logic written in C

I don't think fighting games are particularly known for being processing-intensive. What scares me is this... https://www.youtube.com/watch?v=lw8yq1ubQG0 And no, I don't hate Metal Slug. :roll: I think you already know what game I'm talking about now. :lol: (the 15fps 2D action game) I still can't fathom how it's possible for it to screw up that badly, though.
Re: Measuring the time that a function takes
by on (#158369)
mikejmoffitt wrote:
M2 wrote Fantasy Zone II DX for Sega System 16C in C

And I'd assume that Konami also wrote II DX in C.

Quote:
Plus, the game received a perfect port to Nintendo 3DS.

The 3DS is also a shload faster. Each 68000 instruction outside drivers could probably have been translated to the equivalent C without much problem, letting the compiler's optimizer sort it out.

Quote:
As poorly as Sonic Spinball runs on Sega Genesis, we must remember that the game was made in four months

I made an NES platformer in assembly language in six months part time, doing all code but the audio myself, and it could have been four if the art was on time.
Re: Measuring the time that a function takes
by on (#158376)
Espozo, I don't know if you're aware of this, but the main reason Metal Slug 2 is as slow as it is comes down to a very, very stupid bug.

Retro dev is where I get my assembly language fix - I find it really satisfying and fun. But my Master's is also in programming language theory, so I kind of take the dismissal of higher level languages as something of a personal affront. Not to sound condescending, but your dynamic VRAM problem is basically a non-issue when using modern languages (not necessarily C) which have things like dynamic allocation, garbage collection, and hash tables built-in. But this doesn't make programming just busy work - on the contrary, it frees your mind to work on even more difficult, interesting problems. Programming languages aren't just means of expressing your thoughts but also tools that open up new ways to think, and - even if you enjoy ASM work - you're really doing yourself a disservice as a programmer if you close your mind off to them.
Re: Measuring the time that a function takes
by on (#158377)
Espozo wrote:
I feel like on systems that have their own strengths and advantages, it would be best to make a similar game, but build it around the strengths of each system's hardware. I guess you could port the bulk of the code over, though. I just see a lot of games on both systems being downgraded to be exactly the same on both: ports across the Genesis and the SNES that only use about 64 colors on the SNES version, or that keep the Genesis's 256-pixel-wide screen in mind, or that reserve part of VRAM for sprites (you can try to dynamically swap it out so that it seems you have as much space as you could ever need, but that's a pain in the butt, as I can tell you...) when more could be used.

Almost all of the time, those aren't codebase ports; they're more akin to remakes. Another World is an example of a true port: the game runs in a VM designed by its creator, so by porting the VM, the game is faithfully brought to other platforms. That's even more high-level than writing a game in C.

Tepples wrote:
The 3DS is also a shload faster. Each 68000 instruction outside drivers could probably have been translated to the equivalent C without much problem, letting the compiler's optimizer sort it out.

"Translating" to C from 68000 assembly is not practical and would require lots of manual work to make truly portable code out of it. M2 has stated clearly the game is written in C. The point isn't performance, but that the game could be ported to a new native platform and be easily updated and modified along the way.


Tepples wrote:
I made an NES platformer in assembly language in six months part time, doing all code but the audio myself, and it could have been four if the art was on time.

That's not a fair comparison. Spinball is a larger game, and development today in any language is easier than development in the '90s was. Again, though, that's not the point: Spinball was ported to the Game Boy Advance without a total rewrite because it was written in C.
Re: Measuring the time that a function takes
by on (#158381)
mikejmoffitt wrote:
"Translating" to C from 68000 assembly is not practical and would require lots of manual work to make truly portable code out of it.

I remember reading an article/interview saying that SEGA made a tool to perform this translation, so they could release Sonic CD (and S3&K?) for Windows. I have no idea how portable the resulting code was, probably not very.
Re: Measuring the time that a function takes
by on (#158385)
Espozo wrote:
What scares me is this... https://www.youtube.com/watch?v=lw8yq1ubQG0 And no, I don't hate Metal Slug. :roll: I think you already know what game I'm talking about now. :lol: (the 15fps 2D action game) I still can't fathom how having it screw up that bad is possible though.

It's a mistake to think that performance problems are simply caused by using C, or that they can simply be solved by using assembly.

Metal Slug 2 has performance problems because performance wasn't made a priority objective. It doesn't matter if you used C or assembly or whatever, you can ALWAYS create situations in the game capable of bringing your engine to its knees. Making a high performance game isn't just the compiler's job, it's the job of everyone on the team. Artists, designers, programmers, etc.

Management actually has to tell people "our game must run at 60fps wherever it can", assign people to test the game and measure it, and fix problems as they occur (sometimes by changing code, but probably more often by changing level design).

If it's not a priority, management is more likely to say "no, don't work on that spot where it slows down, it's good enough. Work on --- instead." If, as a programmer, you were to abandon your task and start hand-optimizing stuff because you think performance is more important than what the people paying you told you to do... you're not likely to keep your job long.

adam_smasher's link is very informative. That particular huge slowdown bug is not the result of using C or assembly directly, but the result of an error in logic made by the programmer. (You can make mistakes in any language.)
Re: Measuring the time that a function takes
by on (#158389)
That is a big part of why Metal Slug 3 with Metal Slug 2's data structures results in a much better experience called Metal Slug X.

Tokumaru wrote:
I remember reading an article/interview saying that SEGA made a tool to perform this translation, so they could release Sonic CD (and S3&K?) for Windows. I have no idea how portable the resulting code was, probably not very.

That's interesting, I'd like to read about that. S3&K for Windows is a minimal emulator that triggers sound cues, though, not a translation.