In his new post, Michael Kimsal shares his thoughts on framework benchmarking, especially in the context of speed.
I've followed the TechEmpower benchmarks, and every now and then I check out benchmarks of various projects (usually PHP) to see what the relative state of things is. Inevitably, someone points out that "these aren't testing anything 'real world' - they're useless!". Usually it's from someone whose favorite framework has 'lost'. I used to think along the same lines; namely that "hello world" benchmarks don't measure anything useful. I don't hold quite the same position anymore, and I'll explain why.
He goes on to talk about the purpose of using a framework and what kind of functionality one should provide. The usefulness of a framework is measured by the tools it provides and how easy it makes them to use. Benchmarks, by contrast, are only about speed, performance and overhead.
What those benchmark results are telling you is "this is about the fastest this framework's request cycle can be invoked while doing essentially nothing". [...] These benchmarks are largely about establishing that baseline expectation of performance. I'd say that they're not always necessarily presented that way, but this is largely the fault of the readers.
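To make that baseline idea concrete, here is a minimal sketch in Python (not from Kimsal's post; all names are hypothetical) comparing a bare "hello world" handler against the same handler reached through a toy routing-and-middleware layer. The difference between the two timings is the kind of per-request overhead a "hello world" benchmark exposes:

```python
import timeit

# A "hello world" handler: the request cycle doing essentially nothing.
def hello_handler(request):
    return "Hello, World!"

# A toy "framework" layer (hypothetical) adding routing and middleware
# steps before reaching the very same handler.
def framework_dispatch(request, routes, middleware):
    for mw in middleware:
        request = mw(request)
    handler = routes.get(request.get("path"), hello_handler)
    return handler(request)

def benchmark(iterations=100_000):
    routes = {"/": hello_handler}
    # Five no-op middleware layers, standing in for framework machinery.
    middleware = [lambda req: dict(req, seen=True)] * 5
    bare = timeit.timeit(lambda: hello_handler({"path": "/"}),
                         number=iterations)
    framed = timeit.timeit(
        lambda: framework_dispatch({"path": "/"}, routes, middleware),
        number=iterations,
    )
    return bare, framed

if __name__ == "__main__":
    bare, framed = benchmark()
    print(f"bare handler:  {bare:.3f}s")
    print(f"with dispatch: {framed:.3f}s")
```

The point of such a measurement is not that the overhead is large in absolute terms, but that it is the floor: no real application built on that dispatch path can be faster than this.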
He then refutes some of the common arguments for working around a slow framework (like "just throw hardware at it"). He points out that, even with other improvements, you may reach a point where your framework of choice has simply become too slow and you need to move on. He also advises weighing maintainability, and what you'd be switching from and to, when considering such a move.