This article first appeared on Quora.com in September 2021.
The question is about the virtual machine (the CLR, Common Language Runtime), not the framework implementation built on top of the CLR. Apparently the question is not easy to answer (my answer is the first and only one so far). In fact, one hardly finds any meaningful measurement results on the web, only many unsubstantiated claims.
Both Mono and CoreCLR are implementations of the ECMA-335 / ISO/IEC 23271 standard (the Common Language Infrastructure). Both are open-source, and both are now being developed by Microsoft. CoreCLR is the successor of the proprietary .NET CLR, which was also developed by Microsoft.
So which one is faster, CoreCLR or Mono? According to my own measurements on Windows and Linux, both achieve roughly the same performance on average; the geometric mean values of the benchmarks are no more than 15% apart. So based on my results, the Mono CLR is a little slower than the CoreCLR, but the measurement error is probably larger than the difference. See below for the details.
I have used the .NET Framework and C# in various projects since about 2003. I have also implemented a couple of programming languages and compilers. Mono has always fascinated me because the project has been open-source from the beginning, is available on many platforms and - depending on the configuration - has comparatively low resource requirements (e.g. < 10 MB for a minimal deployment).
I recently evaluated Mono for a project but was once more dismayed by claims of its supposedly poor performance. According to BenchmarkDotNet, for example, the Mono CLR is about three to four times slower than the CoreCLR when calculating hash sums. I was skeptical, however, since I would probably have noticed such big differences in my own applications, and such statements were only ever backed by individual microbenchmarks - if by anything at all. I therefore decided to perform my own measurements based on a sound benchmark suite with which I already had experience from other projects.
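For context, the claims I came across are based on microbenchmarks of roughly this shape. The following is a minimal sketch in the style of BenchmarkDotNet's introductory hash example; the input size and the choice of hash algorithms are my assumptions, not necessarily the exact benchmark behind the cited numbers:

```csharp
using System;
using System.Security.Cryptography;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class HashBenchmarks
{
    // 10 kB of pseudo-random input; the size is an arbitrary choice.
    private readonly byte[] data = new byte[10_000];
    private readonly SHA256 sha256 = SHA256.Create();
    private readonly MD5 md5 = MD5.Create();

    public HashBenchmarks() => new Random(42).NextBytes(data);

    [Benchmark]
    public byte[] Sha256() => sha256.ComputeHash(data);

    [Benchmark]
    public byte[] Md5() => md5.ComputeHash(data);
}

public static class Program
{
    // BenchmarkDotNet runs each [Benchmark] method many times and reports
    // the mean time per call for the selected runtime(s).
    public static void Main() => BenchmarkRunner.Run<HashBenchmarks>();
}
```

Note that such a benchmark largely measures the framework's hash implementation (which may even delegate to native code) rather than the code the JIT generates for your own program - one more reason why a single microbenchmark says little about overall runtime performance.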
The Are-we-fast-yet benchmark suite has its origins in scientific studies of the performance of various dynamic programming languages (see [1] and [2]). It can be used to compare different programming languages as well as different implementations of the same programming language. For someone who builds compilers, this is an important tool, e.g. to assess the effectiveness of optimization measures. Compared to other benchmark suites such as The Computer Language Benchmarks Game [3], the goal is not to win at any cost, but to compare as fairly and objectively as possible.
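To give an idea of the structure, here is a minimal C# sketch of such a harness. The names, the iteration count and the loop structure are illustrative, not the suite's actual API; each benchmark implements a fixed workload plus a result check, and only the time of the benchmark loop itself is measured:

```csharp
using System;
using System.Diagnostics;

// Sketch of an Are-we-fast-yet style harness (illustrative, not the suite's code).
abstract class Benchmark
{
    public abstract object Execute();             // one benchmark iteration
    public abstract bool VerifyResult(object r);  // guards against wrong results and dead-code elimination

    public long Run(int iterations)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            if (!VerifyResult(Execute()))
                throw new InvalidOperationException("benchmark produced a wrong result");
        }
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}

// One of the suite's microbenchmarks: the Sieve of Eratosthenes,
// which counts the 669 primes below 5000.
sealed class Sieve : Benchmark
{
    public override object Execute()
    {
        var flags = new bool[5000];
        for (int i = 0; i < flags.Length; i++) flags[i] = true;
        int primeCount = 0;
        for (int i = 2; i < flags.Length; i++)
        {
            if (!flags[i]) continue;
            primeCount++;
            for (int k = i + i; k < flags.Length; k += i) flags[k] = false;
        }
        return primeCount;
    }

    public override bool VerifyResult(object r) => (int)r == 669;
}

static class Harness
{
    static void Main() =>
        Console.WriteLine($"Sieve: {new Sieve().Run(100)} ms for 100 iterations");
}
```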
I have implemented several backends for my Oberon compiler [4], including one for the CLR. Oberon is a strongly and statically typed programming language with a focus on simplicity; it is similar to Pascal but has object-oriented features and a garbage collector. I also implemented the Are-we-fast-yet suite in Oberon and compiled it with my compiler for the CLR; the resulting assembler and binary files can be downloaded from my website [5]. As you can see from the assembler files, only a few basic framework functions from the mscorlib assembly are used, so the results are representative of the raw CLR performance.
The following chart shows the results measured on my Windows 10 laptop; relative performance values are shown (with a logarithmic scale), i.e. how much faster (to the left) or slower (to the right) compared to Mono 3 (value 1.0).
It is immediately noticeable that if you only compare a single microbenchmark, you can get quite a wrong impression of the overall performance; e.g. in Sieve, CoreCLR 5 is more than twice as fast as Mono 5, and in NBody the factor is almost three.
The geometric mean of all benchmarks per CLR implementation is presented in the following diagram (see [6] why the geometric mean is used here).
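For illustration, this is how the geometric mean of relative performance factors can be computed (the values below are made up, not my measured results):

```csharp
using System;
using System.Linq;

class GeoMeanExample
{
    static void Main()
    {
        // Relative runtimes of one implementation vs. the baseline (1.0 = equal);
        // made-up values for illustration only.
        double[] factors = { 0.9, 1.2, 2.3, 0.8, 1.1 };

        // n-th root of the product of all factors, computed via logarithms
        // to avoid overflow/underflow when many factors are multiplied.
        double geoMean = Math.Exp(factors.Average(x => Math.Log(x)));

        Console.WriteLine($"geometric mean: {geoMean:F2}");
        // Unlike the arithmetic mean, this summary of ratios does not depend
        // on which implementation is chosen as the baseline.
    }
}
```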
As you can see, CoreCLR is indeed a bit faster than Mono 3 on average, but only by a relatively small margin (i.e. around a factor of 1.1, not the claimed factor of two to four). It is also interesting to see that Mono 5 is slightly slower than Mono 3.
If you are interested in the detailed results, have a look at [7] and [8]; the latter includes the Linux results and compares the performance of the Mono CLR with LuaJIT, Node.js and Crystal; on Linux, Mono 3 achieves about the same performance as Node.js.
Update September 15th
In the course of a very interesting discussion on Reddit [9], I made additional measurements, including the x64 versions of the runtimes (where available) and finally the current CoreCLR 6.0 RC1 runtime, which was published today. The following diagram compares the geometric mean values of all benchmarks for all runtime versions measured so far:
Remarkably, the x64 performance is on average no better than the x86 performance. All in all, my conclusion remains the same; only the spread of the results is slightly larger (about 30% instead of 14%). For the detailed results have a look at [10]; note that the x86 results for Mono 3 and 5 as well as CoreCLR 3 and 5 are those from [7] and correspond to the diagrams above.
[2] Cross-Language Compiler Benchmarking: Are We Fast Yet?
[3] Which programming language is fastest?
[4] GitHub - rochus-keller/Oberon: Oberon parser, code model & browser, compiler and IDE with debugger
[5] http://software.rochus-keller.ch/Are-we-fast-yet_CLI_2021-08-28.zip
[6] http://ece.uprm.edu/~nayda/Courses/Icom5047F06/Papers/paper4.pdf
[9] Is the Mono CLR really slower than CoreCLR?
[10] http://software.rochus-keller.ch/Are-we-fast-yet_results_windows_x64.pdf