Does it help to know the "scalability" rating of software when making hardware and software procurement decisions?
The answer is “Yes, of course!”, as I explain below. But first, an analogy. Toys, or more specifically kids' toys, are typically rated by the age group for which they are recommended. There is, in fact, a 300-odd page document that provides the guidelines! While not an exact science, the age rating does help in toy selection. Computer game and Hollywood movie ratings likewise provide useful information about their appropriateness for consumers of different ages. Here is a typical example:
However, software products rarely, if ever, provide any information about the size of the target machines on which they are designed to run. For instance, it is hard to know whether a product can run on multi-core machines and, if so, up to what number of CPUs it can scale. Often it is not even clear whether the software can take advantage of multi-core machines at all, even though such machines are becoming omnipresent in our world! Release notes may list the operating systems and microprocessors on which a product runs, its dependencies on other software or runtime libraries, or even its memory requirements. But I have yet to see mainstream software products state whether, and to what extent, they scale in the multi-core era!
Is such information necessary before procuring software products? I certainly believe so. For client-side software, it can help justify and clarify whether a quad-core machine is more appropriate than a dual-core or single-core machine. For server-side software, scalability numbers can help determine whether a larger and more expensive SMP machine purchase is justified. Using my rudimentary graphics skills, I created a quick (perhaps ghastly) graphic to illustrate what I wish to see:
Now, I do admit that scalability will vary depending on the actual workload and on which portions of the software product the workload exercises. But it would still help to know the scalability for typical, or even extreme, workloads so that a product's scalability can be shared with consumers.
Intel(R) Advisor XE 2013 Update 4, which was released in late July 2013 and is now available as part of the Studio XE 2013 SP1 packages, helps obtain scalability information for portions of software programs (refer to “Maximum Site Gain” and “Maximum Program Gain for All Sites” in the tool shown in the image below).
This scalability information can be extremely valuable for comparing different parallelization opportunities, for sizing the machine on which the parallelized program will run, and so on. Parallel programs can take advantage of multi-core machines, and the investment in such machines is justified when they can be adequately utilized.
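As a rough illustration of how such a scalability figure feeds into a sizing decision, Amdahl's law gives a first-order estimate of the speedup a program can achieve on machines with different core counts. The sketch below is not from the tool itself; the 90% parallel fraction is a purely hypothetical value standing in for what a measurement (e.g., from a Suitability analysis) might report.

```python
# A minimal sketch of Amdahl's law, assuming a hypothetical program
# whose measured parallelizable fraction is 90% of its runtime.
#
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n is the core count.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Estimate the speedup on `cores` cores for a program whose
    parallelizable fraction of runtime is `parallel_fraction` (0.0-1.0)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

if __name__ == "__main__":
    p = 0.90  # hypothetical parallel fraction
    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} cores -> {amdahl_speedup(p, n):.2f}x speedup")
```

A buyer could read such a table directly: if the predicted speedup flattens out beyond, say, 8 cores, a larger SMP purchase is hard to justify for that workload, which is exactly the kind of decision a published scalability rating would support.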
Given the tremendous advantages of knowing the scalability of software products, and the availability of tools like Intel(R) Advisor XE that make it possible to learn the scalability of key portions of a product (when source code is available), why is it that software products are not marketed and procured based on their scalability? Why do software vendors not publish scalability information? Do you see the need for scalability numbers as well? Let me know your thoughts – I'd appreciate hearing from you!