Financial Management: Final Thoughts
This is the sixth and final post in my management series. It does not really offer actionable insights or tools you can use; I am writing it to respond to questions I was asked by readers of the previous posts.
In particular, I want to focus on the following questions:
- To what extent are the concepts I’ve discussed in these posts applicable across different industries?
- A lot of the tools and ways of making business decisions that I’ve covered could be automated. Why haven’t they been?
- All the tools I’ve shared are Excel-based; don’t companies use big data / more sophisticated tools?
Differences across industries
It’s true that I learnt about the vast majority of concepts I covered in this series through my work at P&G. This does not mean that these concepts are only relevant to fast-moving consumer goods, or even only relevant to companies that sell physical goods.
To demonstrate this, here is a quick recap of the things I discussed, and a few examples of how they apply to different industries, including non-manufacturing ones:
In the first substantive post in the series, I explained that revenue growth can be decomposed into volume (selling more units), price (charging more for each unit) and mix (selling more premium units). Managers who understand this realise they have three levers to pull to grow sales.
As I wrote, this concept is relevant in every single industry. Consider Google: Google makes its money by ‘selling’ ads. It can grow its revenue by selling more ads, making more money for any given ad, or selling more expensive ads (say, those advertising trips to Hawaii instead of those advertising trips to Brighton). True, Google has less control over each of these levers than a company like P&G: the number of ads it sells depends on consumers’ search behaviour, and unlike a retailer, which can influence volume by changing prices, Google cannot, because consumers never see the cost of an ad. Similarly, the kind of ads Google sells depends on what users are searching for — you won’t get an ad for a Louis Vuitton bag if you searched for vets near you. And finally, Google does not set prices; prices are determined by auction.
Nevertheless, there are things Google can do to grow volume, price and mix: it can build algorithms that do a better job matching ads to users’ queries — building on the previous example, if you did see a Louis Vuitton ad when you were looking for a vet, you wouldn’t click on it. Since Google makes money when people click on ads, serving more relevant ads increases volume.
It can also improve mix, by creating more valuable ad space — for example, on YouTube or other properties, such as Play. And though Google does not set prices, it can convince more advertisers to invest in online ads, thus increasing competition in auctions, hence driving prices up.
The same principles apply to any industry I can think of — from media agencies (which can target more clients (volume), offer more premium services (mix) or simply increase prices (pricing, duh)) to space travel (more rockets (volume), better destinations (mix (trip to Mars!)) or more expensive tickets (obvious)).
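The volume/price/mix decomposition can be made concrete with a small calculation. Here is a minimal sketch, using invented SKU-level figures (the products, prices and quantities are my own assumptions, not from any real business). It splits revenue growth into a volume effect (change in total units, valued at the prior average price), a price effect (current volumes, valued at each SKU's price change) and a mix effect (the shift between cheap and premium units, valued at prior prices):

```python
# Price-volume-mix decomposition of revenue growth.
# All figures below are hypothetical, for illustration only.

def pvm_decomposition(prior, current):
    """Split revenue change into volume, price and mix effects.

    `prior` and `current` map SKU -> (price, units).
    The three effects sum exactly to the total revenue change.
    """
    rev0 = sum(p * q for p, q in prior.values())
    rev1 = sum(p * q for p, q in current.values())
    q0_total = sum(q for _, q in prior.values())
    q1_total = sum(q for _, q in current.values())
    avg_price0 = rev0 / q0_total

    # Volume: change in total units, valued at the prior average price.
    volume = (q1_total - q0_total) * avg_price0
    # Price: current volumes, valued at the change in each SKU's price.
    price = sum(current[s][1] * (current[s][0] - prior[s][0]) for s in prior)
    # Mix: shift of units between SKUs relative to the prior sales mix,
    # valued at prior prices.
    mix = sum(
        (current[s][1] - q1_total * prior[s][1] / q0_total) * prior[s][0]
        for s in prior
    )
    return volume, price, mix, rev1 - rev0

prior = {"basic": (2.0, 100), "premium": (5.0, 50)}
current = {"basic": (2.0, 90), "premium": (5.5, 70)}
volume, price, mix, total = pvm_decomposition(prior, current)
# volume + price + mix add up exactly to the total revenue change
```

In this invented example, revenue grew even though the basic SKU sold fewer units, because the price and mix effects more than offset the volume decline — exactly the kind of insight the decomposition is meant to surface.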
In the second post, I talked about how to determine the optimal price to charge for a product, and the different ways through which one can set a price (besides changing the list price).
The first of these — the way to set the optimal price — is the same across industries: it’s a mathematical identity that profit is maximised when marginal revenue equals marginal cost. Setting prices this way may look strange in industries where marginal costs are almost zero; it’s also true that for companies like Google, which do not directly set prices, reaching the optimal price is hard.
Nevertheless, keeping this concept in mind is important: if you are a manager, you should do everything in your power to get marginal revenue as close as possible to marginal cost. This may mean drastically reducing the prices you charge, and doing so may sound crazy. But trust me, this is what maximises profit. It’s worth hammering this in, because so many companies simply take their costs and charge a mark-up. This approach is barbaric, archaic, unprincipled, ungrounded in any kind of reasoning — if your company does this, do something about it.
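To make the marginal-revenue-equals-marginal-cost condition concrete, here is a minimal sketch under an assumption of linear demand; the demand curve and cost figures are invented for illustration, not taken from any real business:

```python
# Finding the profit-maximising price where marginal revenue equals
# marginal cost, for a hypothetical product with linear demand.

def optimal_price(a, b, marginal_cost):
    """Closed-form optimum for linear demand q = a - b*p.

    Revenue as a function of quantity is q * (a - q) / b, so marginal
    revenue is (a - 2q) / b; setting it equal to marginal cost c gives
    q* = (a - b*c) / 2, which corresponds to the price below.
    """
    return (a / b + marginal_cost) / 2

def profit(p, a, b, marginal_cost):
    q = max(a - b * p, 0)  # demand cannot go negative
    return (p - marginal_cost) * q

# Invented demand: 100 units at a price of zero, losing 2 units per
# currency unit of price; marginal cost of 10 per unit.
a, b, c = 100, 2, 10
p_star = optimal_price(a, b, c)

# Sanity check: no price on a fine grid does better than p_star.
best_grid = max(profit(p / 100, a, b, c) for p in range(0, 5001))
assert profit(p_star, a, b, c) >= best_grid - 1e-9
```

The grid search at the end is just a check that the calculus is right; in a real business the hard part is estimating the demand curve, not solving the optimisation.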
The second part of the post, on different ways to set prices, is indeed not applicable in many cases: you can’t really play with sizing when charging a price for a car. But you should not dismiss pricing tactics as inapplicable to your industry out of hand. There are many tactics not in wide use outside of FMCG that could work. I’ve never seen a ‘buy x for y’ promotion for hotel rooms, but I don’t see why they couldn’t work well: if my friends and I are looking for a place to stay, we might take advantage of such promotions. So my advice to readers is to think outside the box — do not reject something because it’s never been tried in your domain.
The third post focused on ways to reduce costs, and on how to think about cost allocations. The first part dealt exclusively with COGS, and it’s true that COGS are not very important in services. Nevertheless, I think some of the advice I shared can be applied across industries.
For example, I talked about how scale is a good way to reduce costs. This is true for costs other than COGS. I also shared a methodology for reducing raw and packaging materials costs: listing every single material used in production, benchmarking across sizes and variants, challenging marketing to justify the use of non-functional ingredients, and so on. A similar approach can be taken to non-COGS expenses, such as marketing costs. You may want to add additional metrics — e.g. ROI, or reach — but the basic process would be similar.
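The benchmarking step can be sketched in a few lines of code. The variants, materials and costs below are entirely invented, and a real exercise would first normalise costs by pack size (e.g. cost per ml) before comparing; this sketch simply compares like-sized variants and flags any material that costs materially more than the best-in-class figure:

```python
# Benchmarking material cost per unit across comparable product
# variants. All variants, materials and costs are hypothetical.

def flag_outliers(cost_per_unit, threshold=1.25):
    """Flag any variant whose cost for a material exceeds the cheapest
    variant's cost for that material by more than `threshold` (25%)."""
    flags = []
    materials = {m for variant in cost_per_unit.values() for m in variant}
    for material in materials:
        costs = {v: c[material] for v, c in cost_per_unit.items() if material in c}
        best = min(costs.values())
        for variant, cost in costs.items():
            if cost > best * threshold:
                flags.append((variant, material, cost, best))
    return flags

cost_per_unit = {
    "shampoo_fresh": {"bottle": 0.08, "fragrance": 0.02},
    "shampoo_sport": {"bottle": 0.19, "fragrance": 0.02},  # bottle looks pricey
}
for variant, material, cost, best in flag_outliers(cost_per_unit):
    print(f"{variant}: {material} costs {cost:.2f}/unit vs best-in-class {best:.2f}")
```

The output of such a scan is a list of questions to take to engineering or marketing ("why does this bottle cost more than twice the cheapest one?"), not a verdict on its own.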
A central point in the post was the distinction between gross margin and gross profit; as I wrote before, a huge number of people focus too much on margin, thus making wrong decisions. Gross margin is far less important in services, but the point on comparing margins to profit stands: in services too, people may focus too much on percentages (e.g. fees as a % of transaction size) — ignoring the fact that a lower fee applied to a huge number may be better than a high fee applied to a small number.
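To put invented numbers on the percentages-versus-absolutes point:

```python
# Percentage fee vs absolute profit: a lower rate applied to a larger
# base can win. Figures are invented purely for illustration.

def fee_revenue(rate, volume):
    """Absolute revenue from a percentage fee on transaction volume."""
    return rate * volume

small_rate_big_base = fee_revenue(0.005, 100_000_000)  # 0.5% of $100M
big_rate_small_base = fee_revenue(0.03, 10_000_000)    # 3% of $10M
# The 0.5% fee earns more in absolute terms, despite the rate being
# six times lower.
assert small_rate_big_base > big_rate_small_base
```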
Finally, the section on how to allocate costs is applicable in any company.
I don’t think I need to bother explaining why modelling principles are the same regardless of what it is you are modelling.
Why hasn’t this been automated?
A friend pointed out that most of the approaches I’ve covered could easily be automated: surely it would be easy for an algorithm to find the price that sets marginal revenue equal to marginal cost? Or to point out areas for cost savings based on benchmarking? Or to highlight the cost of non-essential ingredients?
The answer to all these questions is yes. All of these things can and should be automated. Yet a huge number of these analyses are still carried out by people like me, not algorithms. For example:
- When I was a plant CFO, I did benchmarking analyses manually; yes, there were tools I could use to consolidate the data I needed, but I still had to pull everything together myself;
- As a sales finance manager, I manually reviewed promotions to identify ones that did not pay out, or that could be made more profitable;
- As a commercial finance manager, I built my own models to understand the optimal price for certain products;
- As a profit forecaster, I built my own reconciliations to show how profit evolved over time.
So why all this manual work for things that could be done by computers? In my experience, there are a few reasons:
Inertia & lack of relevant skills
Most people do not have the urge to streamline operations, and even fewer have the creativity or technical skills to do so. Time and time again I come across people performing a task that they could have automated — if not across their entire organisation, then at least for themselves. Whenever I challenge such people on why they have not made life easier for themselves, the answers I get are either that they had not thought of doing so, or that they lacked the time — never mind the fact that they would end up saving much more time in the future.
(I, on the other hand, have a double bias towards automating things, perhaps too much so (in that I might spend too much time building something to run automatically when it’s unlikely to be needed again): first, I studied computer science, so I’m used to having algorithms do things for me; and second, I am incredibly lazy and hate doing things manually (a consequence of having spent time working at my mother’s law firm; legal practice in Greece involves a lot of mundane, manual work)).
Lack of incentives
Companies rarely incentivise efficient operations. Analysts are rewarded for their strategic thinking, insightful analyses, and in good companies, potential for leadership. All these are valuable indeed, but the near-exclusive focus on those means that the person who rolls up their sleeves, does the unsexy work, and streamlines operations risks being overlooked.
(Don’t get me wrong: this is, to an extent, justifiable. A single clever insight (for example, the decision to charge a higher price, or the identification of an overlooked opportunity for COGS reductions) can result in millions of dollars in profit; it is hard to match such achievements by small incremental improvements in the way things are done.
That said, the best kind of efficiency improvement is one that results in both lower cost or effort and improved decision-making ability. What the best analysts do is find efficiencies that also improve their own ability to come up with actionable insights.)
Moreover, managers are rewarded for the size of their teams — if not internally, then certainly externally (a manager whose LinkedIn profile shows they lead a team of 10 looks better than one who leads a team of 2) — so they have little reason to reduce their team’s workload, unless they can redirect the effort elsewhere.
To be sure, mature companies with slow revenue growth do focus on cutting costs, and one way to do this is to make things run more efficiently. But the way such exercises work in practice is top-down: the CEO declares that the company needs to cut 10% of its workforce, and the directive gets cascaded throughout the organisation — only once a division is forced to reduce headcount is it compelled to find efficiencies. So even in such circumstances, the lower levels of the management chain have little reason to volunteer efficiencies.
The situation is worse at companies where there is still rapid revenue growth, and where profit margins are high: such companies have little reason to care about cost savings or efficiencies — indeed, the stock market rewards or punishes them almost exclusively based on their top-line performance.
(There are, of course, exceptions: leaders of certain parts of an organisation that are seen as exclusively cost-drivers, such as manufacturing facilities, are of course compensated on their performance against cost-savings targets.)
Poor data & prioritisation
The final (and in my view, most legitimate) reason for companies’ inability to automate various decision-making processes is lack of good data. Take pricing in FMCG: yes, a computer could look at past sales data, compare number of units sold vs prices, take into account additional factors such as competitors’ prices, promotional activity, marketing activity, number of distribution points, ‘quality’ of shelf-space, the weather, seasonality, the impact of a pandemic, and the impact of increased solar flare activity and determine the exact price that maximises profit. But collecting all this data is currently impossible.
Even the most basic input in this long list, sales data, is hard to come by: you may have your own sales figures (and even then, many companies cannot easily track sales by, say, distribution point), but this is kind of meaningless if you don’t know what competition was doing. In mature markets like the UK or the US there are companies that aggregate Point of Sale data from retailers and sell this back to manufacturers — but you won’t find this in China where a huge share of trade goes through small, independent stores; nor will you find it in other industries.
Then you have legacy systems. If you run a family business that owns several factories, you may not have integrated the various factories’ reporting systems — and so, benchmarking across them will be a manual exercise. Given the number of competing priorities, upgrading your IT may justifiably not be at the top of your to-do list.
Unfortunately, most companies find that data — both internal and external — is fragmented. This makes it very difficult to automate stuff. However, I’m hopeful that we will see more and more data become open, portable, and freely available.
Big data / More sophisticated tools
With the exception of regression, every tool I have described in this series is almost childishly simple; in addition, all of them are Excel-based. A couple of readers asked why this is: isn’t everyone on Python these days? Why do companies still use Excel in the age of big data?
It’s mostly due to the same reasons that prevent automation: inertia, lack of skilled employees and so on. But, to be honest, my own view is that the hype around big data is overblown. Not that I dislike big data, or think companies should not strive to build the capability to analyse it; I just do not think it’s the panacea people often seem to believe it is.
First, as mentioned in the previous section, many companies do not have good data, and as everyone knows, in modelling garbage in = garbage out. So there’s no point hiring grads with master’s degrees in data analytics, or sending your star employees on expensive courses, if you do not have reliable data in the first place.
Second, many decisions do not need all that much data. As I wrote in the post on modelling, many models rest on a couple of inputs that have disproportionate effects on the output. You do not necessarily need heaps of data to make reasonable assumptions about the value of such variables; furthermore, tools such as Monte Carlo simulations and sensitivity analyses enable you to make decisions even if you cannot make accurate predictions on the value of some variables.
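As an illustration of the Monte Carlo point, here is a minimal sketch of simulating a decision with uncertain inputs. The product, the distributions and every figure below are my own invented assumptions; the point is only that a distribution of outcomes can support a decision even when no single input can be predicted precisely:

```python
# A minimal Monte Carlo sketch: estimating the profit of a project
# when two inputs are uncertain. All figures are invented.
import random

def simulate_profit(n=10_000, seed=42):
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        # Uncertain inputs: units sold (normally distributed) and unit
        # cost (uniform, and sometimes above the selling price).
        units = rng.gauss(mu=10_000, sigma=2_000)
        unit_cost = rng.uniform(4.0, 7.0)
        price = 6.0  # assumed fixed
        outcomes.append(units * (price - unit_cost))
    return outcomes

outcomes = simulate_profit()
mean = sum(outcomes) / len(outcomes)
loss_probability = sum(o < 0 for o in outcomes) / len(outcomes)
# Even without precise point estimates for the inputs, the simulated
# distribution tells you how likely the project is to lose money.
```

With these made-up distributions, the expected profit is positive but roughly a third of the simulated scenarios lose money — exactly the kind of risk picture a single point estimate would hide.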
Third, and somewhat relatedly, statistics is not an easy discipline. Many practitioners make mistakes; a lot of people rely on causal analysis tools that they do not really understand — I wrote a post on regression analysis, yet I myself cannot really explain how p-values are calculated, even if I know what they represent. This means it can be a mistake to place too much confidence in your data analysts; and unlike simple Excel models, the assumptions of more complex tools can be very difficult to challenge or even understand.
Besides, even if a data scientist really is on top of their game, they may still be unable to ask the right questions. You can have all the data you want, and all the experts to analyse it, but you will not have a competitive advantage unless you know what it is the data can tell you.
None of these are arguments against big data itself; they are mostly words of caution against over-investing in big data solutions. Having too much data can cause analysis paralysis, or make you miss the forest for the trees, or [insert third cliché here]. You should train your employees to ask the right questions before teaching them how to mine big data for answers. And remember that any answer you get from big data is only as good as the quality of the data you mined.
Well, this is it I think. No more wisdom to impart. Happy to answer questions!