That’s exactly how you guesstimate CPU performance. It obviously won’t reflect real-life use cases, but you don’t necessarily need benchmarks to get a ballpark comparison of raw performance. The standard comparison is FLOPS: floating-point operations per second. Yes, different architectures have different instruction sets, but they’re all relatively similar, especially for basic arithmetic. The comparison breaks down for more complex computations, but there are only so many ways to add two numbers together.
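To make that concrete, here’s a minimal sketch of the usual back-of-the-envelope peak-FLOPS estimate: cores × clock × FLOPs per cycle per core. The per-cycle figure depends on SIMD width and FMA support, and the chip specs below are hypothetical, purely for illustration.

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak throughput in GFLOPS.

    This is a ceiling, not a benchmark -- real workloads rarely
    sustain anywhere near it (memory bandwidth, branches, etc.).
    """
    return cores * clock_ghz * flops_per_cycle

# Hypothetical 8-core, 3.5 GHz CPU with AVX2 + FMA:
# 4 doubles per 256-bit vector * 2 (fused multiply-add) * 2 FMA units
# = 16 double-precision FLOPs per cycle per core.
print(peak_gflops(8, 3.5, 16))  # 448.0 GFLOPS
```

Numbers like this are only comparable across chips in the same rough class, which is the point of the original comment: good for a ballpark, not for predicting real application performance.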