While we encourage our analysts to rely first and foremost on some form of intrinsic valuation, we recognize that multiples analysis plays an important role in supporting and sense-checking conclusions. This guide is intended to help analysts navigate the pros and cons of using such multiples.
The use of multiples has a bad reputation in the industry because they are so open to abuse. The simplicity of a single number is seductive to fund managers, and many unscrupulous analysts will engineer the answer they need by picking the right peer group or the right metric. Asked why they chose that multiple or peer group, a smart analyst will always be able to come up with a plausible answer. We would rather encourage analysts to exercise caution when selecting a peer group or multiple, using it not as a sales tool but as a means of triangulating other findings. As a fund manager, I always found that those analysts who had performed an intrinsic valuation were far better equipped to understand and explain why multiples differ from one company to another.
Choosing the right multiple
The right multiple to analyze will vary according to the sector and the stock, but typically an analyst will want to look across a range of metrics. The following provides a brief guide to the most commonly used:
P/E is the most ubiquitous of all multiples due to its enduring popularity with fund managers. It appeals partly because of the ease of calculation (consensus EPS forecasts being widely available), but also because EPS reflects a measure of the final value that accrues to the investor after all deductions. The key drawback with P/E ratios is that a like-for-like comparison between peers is difficult because of differences in capital structures and wide variation in accounting policies.
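The calculation itself is trivial, which is part of the appeal. A minimal sketch, using purely hypothetical figures (the function name and numbers are illustrative, not from any real company):

```python
def pe_ratio(price: float, eps: float) -> float:
    """Price-to-earnings multiple: share price over earnings per share."""
    if eps <= 0:
        # P/E breaks down for loss-making companies -- one reason other
        # multiples (e.g. EV/Sales) exist for pre-profitability businesses.
        raise ValueError("P/E is not meaningful for zero or negative earnings")
    return price / eps

# A share priced at 30.00 with forecast EPS of 2.00 trades on 15x earnings.
print(pe_ratio(30.00, 2.00))  # 15.0
```

Note the guard clause: the same arithmetic simplicity that makes P/E popular also makes it silently misleading when earnings are depressed or negative.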
EV/EBITA allows a comparison at the enterprise level that is not affected by differences in capital structure, leading to better comparability. Because of this it is the favored multiple of many valuation textbooks. Nevertheless, EV/EBITA still suffers from the second problem – lack of comparability due to differences in accounting policies. In particular, depreciation methods can vary significantly between peers (and in any case, the depreciation figure is only ever a rough approximation of the cost of capital spending).
EV/EBITDA remains popular on the sell side, mainly because it lets analysts employ their favorite P&L metric, EBITDA (for some unknown reason, many analysts still seem to view EBITDA as a good cash flow proxy). While EV/EBITA makes an imperfect estimate of capital intensity, EV/EBITDA ignores capital intensity altogether. Because of this I find it has limited use – a business with a high spend on capital equipment will always look erroneously cheap because this ongoing cost to the business has been ignored.
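The distortion is easy to see with numbers. A hedged sketch with wholly invented figures: the same enterprise value looks cheap on EBITDA and expensive on EBITA once heavy depreciation (a proxy for capital intensity) is deducted:

```python
def enterprise_value(market_cap: float, net_debt: float) -> float:
    """Simplified EV: equity value plus net debt (ignores minorities etc.)."""
    return market_cap + net_debt

def ev_multiple(ev: float, earnings: float) -> float:
    return ev / earnings

# Illustrative company: EV of 1000 on EBITDA of 200, but depreciation of 120
# reflects an ongoing high capital spend.
ev = enterprise_value(market_cap=900.0, net_debt=100.0)
ebitda = 200.0
depreciation = 120.0
ebita = ebitda - depreciation  # 80.0

print(ev_multiple(ev, ebitda))  # 5.0  -- looks cheap; capital intensity ignored
print(ev_multiple(ev, ebita))   # 12.5 -- the capex burden is at least approximated
```

The 5.0x versus 12.5x gap is exactly the "erroneously cheap" effect described above: the EBITDA multiple simply never sees the cost of replacing the equipment.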
P/FCF is also common on the sell side. The advantage of this metric is that it factors in the cash cost of capex, rather than approximating it through depreciation. However, consensus data for FCF is non-existent and each analyst will tend to measure FCF in a different way. Furthermore, because capex is lumpy from one period to the next, the result will vary drastically depending on which year is chosen. Some analysts get around this problem by substituting a “normalized capex” figure. This can be a good metric where the analyst derives their own figures, but it makes comparing figures from two different analysts impossible.
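One common way to normalize (an illustrative sketch, with hypothetical figures – real normalization would also consider growth capex versus maintenance capex) is simply to average capex over several years before deducting it from operating cash flow:

```python
def normalized_fcf(operating_cash_flow: float, capex_history: list[float]) -> float:
    """Free cash flow using multi-year average capex to smooth lumpy spending."""
    avg_capex = sum(capex_history) / len(capex_history)
    return operating_cash_flow - avg_capex

ocf = 150.0
capex = [40.0, 120.0, 50.0]        # lumpy from year to year
fcf = normalized_fcf(ocf, capex)   # 150 - 70 = 80

market_cap = 1200.0
print(market_cap / fcf)  # 15.0 -- P/FCF on normalized figures
```

Note how single-year P/FCF would have ranged from roughly 11x (the low-capex year) to 40x (the heavy-capex year) on the same numbers – which is precisely why the unadjusted metric is so sensitive to the year chosen.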
EV/Sales has limited value because it makes no reference whatsoever to the profitability of a business. It should only be used as a last resort for businesses that have not yet reached profitability or where margins swing around wildly. It only makes sense if the analyst expects margins to eventually settle close to where the peer group already sits.
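This point can be made explicit: a sales multiple only carries information together with an assumed steady-state margin, because the pair implies a profit multiple. A hypothetical sketch (figures invented for illustration):

```python
def implied_ev_ebita(ev_sales: float, normalized_ebita_margin: float) -> float:
    """EV/Sales divided by an assumed normalized margin gives an implied EV/EBITA."""
    return ev_sales / normalized_ebita_margin

# A 1.5x sales multiple at an assumed 10% steady-state EBITA margin
# is equivalent to paying 15x normalized EBITA.
print(implied_ev_ebita(1.5, 0.10))  # 15.0
```

If the analyst has no defensible view on that normalized margin, the sales multiple on its own says almost nothing.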
P/BV is primarily used for financial companies. Analysts will compare P/BV with the spread between ROIC and WACC. Where the ROIC of the bank is sustainably above its WACC, or where growth is strong, we would expect the bank to trade at a high multiple of book. However, where ROIC is merely in line with WACC, we would typically expect the bank to trade much closer to 1x P/BV (or even below this).
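The textbook version of this link is the Gordon-growth “justified” price-to-book, usually stated in equity-level terms (ROE against the cost of equity, the natural analogues of ROIC and WACC for a bank). A sketch with hypothetical inputs:

```python
def justified_pb(roe: float, cost_of_equity: float, growth: float) -> float:
    """Gordon-growth 'justified' P/BV: (ROE - g) / (r - g)."""
    if cost_of_equity <= growth:
        raise ValueError("requires cost of equity > growth")
    return (roe - growth) / (cost_of_equity - growth)

# Returns sustainably above the cost of equity warrant a premium to book...
print(justified_pb(roe=0.15, cost_of_equity=0.10, growth=0.04))  # 0.11/0.06 ≈ 1.83x
# ...while returns merely in line with it point to roughly 1x book.
print(justified_pb(roe=0.10, cost_of_equity=0.10, growth=0.04))  # 1.0x
```

The second case is the formal version of the intuition above: no excess return, no premium to book value.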
Choosing the right year
The next choice is which year to use. Using historical-year or trailing-twelve-month data has merit in that it avoids the need to rely on unreliable analyst forecasts. However, historical data is inevitably distorted by non-operating and exceptional items and, particularly in the case of cyclical companies, is unlikely to represent a normalized year.
The standard practice on the sell side is to focus on current-year multiples alongside those one to two years forward. While this relies on the use of forecasts, it has the benefit of allowing for a degree of normalization in profits. The key danger here for growth companies is that analysts tend to overestimate growth, making the company look much cheaper than it really is one to two years out.
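How much an optimistic forecast flatters a forward multiple is easy to quantify. A hypothetical sketch (invented figures): a stock on 25x current earnings looks very differently valued two years out depending on the growth rate assumed:

```python
def forward_pe(price: float, current_eps: float, growth: float, years: int) -> float:
    """P/E on earnings compounded forward at an assumed growth rate."""
    return price / (current_eps * (1 + growth) ** years)

price, eps = 50.0, 2.0  # 25x current-year earnings

print(forward_pe(price, eps, growth=0.25, years=2))  # 16.0   -- if 25% growth holds
print(forward_pe(price, eps, growth=0.05, years=2))  # ~22.7  -- if growth is only 5%
```

The stock "de-rates" to a seemingly cheap 16x on the bullish forecast, but barely moves on the modest one – the cheapness lives entirely in the growth assumption, not in the price.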
Choosing the right peer group
Multiple comparison tends to lack credibility because no two companies have the same fundamentals. Even where two companies seem to operate in the same narrow business segment there will always be differences in terms of mix – be it geographical, product or customer mix. As a result these businesses will have different growth rates and different ROIC, and will therefore warrant different multiples.
Many analysts fail to recognize this diversity, choosing too wide a peer group and putting too much weight on superficial comparisons. In some circumstances the peer group has blatantly been selected to make the stock in question look good. There is not much you can do about the underlying lack of comparability, but you can mitigate the issue by choosing your peer group wisely. Good analysts will add key metrics to their comparison table, such as expected growth rate and ROIC, so that the degree of comparability is more obvious. Indeed, being able to understand and explain these differences is almost more valuable to an investor than the comp table itself.
Where a company has a number of diverse business lines, a SOTP (Sum of the Parts) is sometimes used. Here an “industry” multiple is applied to each business line and the total compared to the current enterprise value. This can be a valuable exercise to highlight value, but taken out of context it is often misleading. There is just too much scope to slice the company the way you want and to adopt a multiple that suits your case.
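Mechanically, the exercise is just a weighted sum, which is exactly why it is so easy to steer. A hedged sketch with entirely hypothetical segments and multiples:

```python
# Illustrative SOTP: apply an assumed "industry" multiple to each segment's
# EBITA, sum the pieces, and compare with the current enterprise value.
segments = [
    {"name": "consumer",   "ebita": 120.0, "peer_multiple": 12.0},
    {"name": "industrial", "ebita":  80.0, "peer_multiple":  8.0},
]

sotp_value = sum(s["ebita"] * s["peer_multiple"] for s in segments)  # 2080.0
current_ev = 1800.0

print(sotp_value - current_ev)  # 280.0 -- the "implied upside"
```

Nudge the consumer multiple from 12x to 14x and the implied upside doubles – a one-line illustration of how sensitive the conclusion is to the multiples chosen for each slice.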
As well as comparing a company against peers, it can also be useful to compare against the company’s own history. While in isolation this is unlikely to be much use as a valuation tool, it may help to provide context and support for your investment thesis. Under what circumstances has the company traded at particularly low or high multiples? How do current conditions compare to those in the past, and what are the chances that conditions revert? Asking these kinds of questions will add depth to your research that goes beyond an analysis of the current environment.
So by all means, use multiples as part of your research and take advantage of the broad scope of analysis that is possible. Done correctly, it will provide an additional dimension to your research that complements your other findings. But never let the simplicity of a single multiple lull you into a false sense of security. As in many other areas of equity research, you need to remain objective and vigilant against bias. There are already too many sell-side analysts peddling a distorted version of the truth through the superficial and promotional use of multiples – make sure you don’t become one of them!