
Global Affairs

The Increasingly Inadequate Measurement of Productivity


At the centre of a modern economy, at all levels, are three things: people, process, and technology. Today there is much political and societal discussion about the nature of work, from the dynamics of the global economy (supply chains, currencies, taxes, etc.) to job growth and wages. But amid all the discussion, a two-part question comes to mind: what does progress look like, and is it being measured correctly?

Progress in the macro economy is typically measured by productivity, through gross domestic product (GDP). Today, the way productivity is measured is inadequate, and what the current measurement tells policy makers leads them to pull dated levers to stimulate growth. The ripple effect moves from the macro economy through sectors to companies and on to individual workers.

Labour participants are then hurt by the inadequate measurement. Put differently, the information inputs to GDP do not tell the whole picture for the modern economy, yet governments and policy makers treat them as if they do, and then act on that information. Because the information comes from a dated form of measurement, the course corrections it prompts are also dated, and the resulting trickle-down effect hurts companies and individuals alike.

The Economy as a Computer

One way to think about the global economy is as a system: one interconnected computer system. Just as computer systems rely on both hardware (physical assets) and software (digital assets) to function, the global economy relies on both the physical and the digital. And these two economies, the digital and the physical, in the US and other countries, are splitting apart in a kind of mitosis that seems to move faster and further each year (and one that generally tips the economic scale in favour of digital).

However, just as software companies, especially those acting as platforms, are deemed valuable in the private and public marketplaces, their importance should be reflected in GDP. Absent any change to the metric's variables, this would be a welcome accounting upgrade within the existing framework, and a first step in measuring the health of a modern economy.

Of note, society generally places more value on software (digital) companies today than hardware (physical) companies. Perhaps that is because software can be less of a commodity than hardware and because it can be developed and iterated more rapidly.

And the technology industry will continue to grow because it is still in its infancy: there are many layers being added to the stack that are creating new markets.

Interestingly enough, much of the investment activity by big companies, particularly in the Internet of things space, supports the premium value of software whether standalone or in their ecosystems.

A Brief GDP History

GDP is the brainchild of US economist Simon Kuznets in the 1930s. There are two main ways to evaluate GDP; perhaps the most common is the expenditure approach, which uses the following equation: GDP = Consumption + Investment + Government + (Exports - Imports). With variables, GDP = C + I + G + (X - M).
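The expenditure approach can be sketched in a few lines of code. The figures below are placeholders chosen only to show the arithmetic, not real national accounts data:

```python
def gdp_expenditure(consumption, investment, government, exports, imports):
    # Expenditure approach: GDP = C + I + G + (X - M)
    return consumption + investment + government + (exports - imports)

# Illustrative figures in $bn (placeholders, not actual statistics)
gdp = gdp_expenditure(consumption=13000, investment=3000,
                      government=3500, exports=2300, imports=2900)
print(gdp)  # → 18900
```

Note that net exports (X - M) can be negative, as in this sketch, which is how a trade deficit subtracts from measured GDP.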

This equation is important because it is perhaps the single most used statistic, and certainly the most mainstream, for measuring the health of an economy. The problem is that even Kuznets warned the metric is hardly the best way to measure the wealth of a nation, as it mostly evaluates national income. By this metric, depending on perspective, the US and China rank first or second in the world.

Much of the reason this particular metric was adopted was that renowned British economist John Maynard Keynes also supported the measurement, with some caveats about the accounting for Government (spending). Keynes, who gained prominence in the 1930s for his contrarian view on government spending in a bust cycle, which played a role in helping the US recover from the Great Depression, argued the Government (G) portion of the equation needed revision from the private sector to the public sector to ensure accurate measurement of a growing economy after World War II. From there, GDP was codified in the 1950s when British economist Richard Stone presented the metric to the United Nations, which adopted it. In the wartime era, GDP was more about supply; in the post-war era, it was more about demand.

The Insufficiency of GDP

One of the key problems with the GDP metric is that it was not designed to account for layers added to the economy in the future. It was principally designed, with a bias, to account for a manufacturing economy. It did a poor job measuring services and finance, and now it does a poor job measuring the digital economy.

Economists know this, and every so often, perhaps every decade, they address the shortfalls of GDP and, with advances in their field, re-evaluate the economy. With those advances they can take a new base year for a particular economy (re-basing) and apply new measurements to backwards-looking data to gain new insights. There is nothing wrong with re-examining history; however, the result can be revisionist history, and therein lies some controversy.

Interestingly enough, there are alternatives to GDP that probably do a better job measuring the relevance of an economy. One, developed with the help of Harvard economist Michael Porter, is the Social Progress Imperative, which exists to complement GDP rather than ignore it altogether. Its index, the Social Progress Index, measures 54 distinct welfare indicators that broadly capture three things about a country: its ability to provide for its people's most essential needs, its ability to provide the building blocks for individuals and communities to enhance and sustain well-being, and its ability to let all individuals reach their full potential. By this metric the US and China are not first and second; rather, there are at least 10 countries ahead of the US, including the Nordic countries, New Zealand, Australia, and Canada.
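To make the contrast with the single-number GDP concrete, the shape of such an index can be sketched as an aggregation over dimension scores. This is only a toy equal-weight sketch with hypothetical scores; the real Social Progress Index aggregates its 54 indicators with its own published methodology:

```python
def spi_score(basic_needs, foundations_of_wellbeing, opportunity):
    # Toy equal-weight average of the three SPI dimensions (the real index
    # uses its own weighting and aggregation of 54 indicators)
    return (basic_needs + foundations_of_wellbeing + opportunity) / 3

# Hypothetical dimension scores on a 0-100 scale
print(spi_score(90.0, 85.0, 80.0))  # → 85.0
```

The point of the sketch is structural: a welfare index summarises many indicators into comparable dimension scores, rather than collapsing everything into one expenditure total.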

Some economists today are experimenting with various proxy metrics for measuring the digital economy, as some physical products have become digital services. And with technology on an increasingly digital path, the gap between what GDP, a metric from the 1940s, can measure on its own and what the modern economy actually does keeps widening. Perhaps there should be other metrics, in the spirit of the Social Progress Index, that account for the health and productivity of today (and maybe tomorrow). Too much stock is placed in one number (GDP), and, as previously mentioned, governments typically react to manage that one number by pulling certain levers, which has a trickle-down effect on industries, companies, and eventually individuals.


Ray Dalio, founder, Chairman, and Chief Investment Officer of Bridgewater Associates, the world's largest hedge fund, made an excellent video about his eBook, Economic Principles, to explain his views on the economy and how it works. In both he begins with his view of the economy as a machine. He goes on to discuss how growth occurs in three ways: short-term debt cycles (5-8 years), long-term debt cycles (75-100 years), and productivity. He further dives into the role of credit and debt throughout those periods, explaining that credit is the catalyst for the cycles. He also explains that productivity increased at an average rate of just under 2% over the last 100 years due to increases in knowledge.

Carlota Perez, in her book Technological Revolutions and Financial Capital, discusses each technological revolution as well as the basic construct of how they occur. First, she states there have been five such revolutions and that we are living in the fifth, the age of information and telecommunications, which began in the early 1970s and continues today. Each revolution, which typically lasts 50-60 years, contains two periods of roughly equal length (25-30 years): an installation period and a deployment period. Each period is marked by two distinct phases. In the installation period the phases are irruption and frenzy; there is then a turning point, and the deployment period begins, with its own two phases, synergy and maturity. Notably, with technology there can be multiple innovation clusters occurring at the same time, where society could be reaching maturity with one technology class while in irruption with another.

Perez’s long-wave time horizon for technological revolutions is close in length (on the low side) to Dalio’s long-term debt cycles. In deconstructing how technological revolutions happen, there are a few key pieces to know about their dual nature. First, there is a cluster of new dynamic products, technologies, industries, and infrastructures which generates explosive growth and structural change. Second, either simultaneously or shortly after, there are new interrelated generic technologies and organisational principles which can rejuvenate and upgrade mature industries. Together, this dual nature is realised as a change in techno-economic paradigm, and what comes out the other end is progress. That economic progress is also realised in a dual nature: new engines of growth for a long-term upsurge of development, and a higher level of potential productivity for the whole productive system.

It is there that new layers are added to the economy, which under capitalism allows for greater participation in the modern economy. In some parts of the world, generations of technology have been skipped as a society essentially missed the last wave and caught the next one (e.g. much of the continent of Africa broadly missed analogue connectivity with landlines but caught digital connectivity with mobile).

These layers added to society touch upon a few key points. First, the technology stack has grown the same way (as shown above). Second, we are moving towards an eventual state where productivity gains create enough new markets, and add enough new layers to the economy, that most people, if not all, are lifted from poverty. Third, this is driven by the velocity of money, which in an increasingly digital world has accelerated since the dot-com boom. More on this later.

Modern Productivity Gains

As far back as 1987, economist Robert Solow observed:

“You can see the computer age everywhere but in the productivity statistics.”

The digital economy is becoming a productivity accelerator: doing more with less spend. Not surprisingly, the digital economy is currently growing faster than the physical economy. In fact, in 2012, according to the OECD, the information industries maintained a lead in labour productivity over the rest of the economy. So more gains were made in the information economy (arguably the digital economy) than in the rest of the economy (arguably the physical economy).

McKinsey went one step further with its study on global flows. The study introduced a new way to measure the economy, the McKinsey Global Institute (MGI) Connectedness Index. The latest index measured the inflows and outflows of goods, services, finance, people, and data in 139 countries. Though not yet exhaustive of the world's countries, it is a good indicator and a step in the right direction towards measuring progress beyond GDP, which of the aforementioned parameters only measures inflows and outflows of goods, via the difference between exports and imports.

They found that over a decade, global flows have raised world GDP by at least 10 percent; this value totalled $7.8trn in 2014 alone (shown in the graph below). Data flows, at $2.8trn in 2014, have now surpassed the global trade in goods ($2.7trn in 2014) to become the most significant flow in the modern economy (the $2.8trn comprises $2.2trn in direct effects plus an additional $0.6trn in indirect effects of value creation for other types of flows).

The good thing about global connectivity is that it greatly impacts global economic growth. The dark side, however, is that connectivity itself is the barrier to entry; absent connectivity, the gap grows between connected and unconnected economies (essentially the (limited) physical and (growing) digital economies).


Global flows generate economic growth primarily by raising productivity, and countries benefit from both inflows and outflows. Here, and similar to the Social Progress Index, the MGI Connectedness Index could be a great complement to GDP, as it offers a comprehensive look at how countries participate in inflows and outflows of goods, services, finance, people, and data.

As of July 2015 there is a new measurement for the US economy: Gross Domestic Output (GDO), the average of GDP and gross domestic income (GDI), which should be equal but are not, due to measurement error. The Bureau of Economic Analysis began publishing this metric in July 2015.

GDO does a better job in near real-time: GDP typically has a three-month lag, and GDI tracks the later estimates of output better than early estimates of GDP do alone. GDO puts equal weight on both. Though it is an average applied after the fact, it shows some merit and perhaps could be considered as an indicator to implement in real-time.
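The GDO calculation itself is simple; the figures below are hypothetical quarterly estimates, not BEA data:

```python
def gross_domestic_output(gdp, gdi):
    # GDO: equal-weight average of expenditure-side GDP and income-side GDI,
    # which should match in theory but differ in practice because of
    # measurement error (the "statistical discrepancy")
    return (gdp + gdi) / 2

# Hypothetical quarterly estimates in $bn (placeholders, not BEA figures)
print(gross_domestic_output(18500, 18700))  # → 18600.0
```

The equal weighting is the whole design choice: neither the expenditure side nor the income side is trusted more than the other.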

Unfortunately, there is a problem. GDO is only as good as its underlying data (GDP and GDI); if those do a poor job measuring the economy (not accounting for digital productivity), then GDO does not accurately account for our modern economy either.


Additionally, according to the US Bureau of Labor Statistics, labour productivity growth, measured as output per hour, comes from three primary sources: increases in capital, improvements in the quality of labour, and total factor productivity. The last, also known as multifactor productivity, is considered the most important source of productivity growth and can be thought of as the way that labour and capital come together to produce output.
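This three-source decomposition is a growth-accounting identity, and it can be sketched with illustrative numbers. The split below is a made-up example, not BLS figures:

```python
def labour_productivity_growth(capital_deepening, labour_quality, tfp):
    # Growth-accounting identity, in percentage points: growth in output
    # per hour ≈ capital deepening + labour composition + total factor
    # productivity (TFP)
    return capital_deepening + labour_quality + tfp

# Illustrative decomposition (not BLS data): 2.0% productivity growth split
# as 0.8pt from capital deepening, 0.3pt from labour quality, 0.9pt from TFP
print(labour_productivity_growth(0.8, 0.3, 0.9))  # → 2.0
```

In this framing, TFP is the residual: whatever output-per-hour growth cannot be attributed to more capital per worker or a better-skilled workforce.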


Former Netscape co-founder turned venture capitalist Marc Andreessen famously said that “software is eating the world.” Years later he revised and elaborated on the claim, suggesting that because every company is becoming a software company, the winners will be the best software companies in each vertical. And it makes a lot of sense, because in each vertical those pioneering software, specifically platforms, are the ones making the big gains (network orchestrators).

The scary thing for market incumbents is keeping up, through digital transformation, with new market entrants that were born digital and in many ways born global. The people, process, and technology challenges can stunt progress, making a company's productivity seem dead in the water. Incumbents are also challenged in the public markets: they do not enjoy the price multiples of their digital contemporaries, which can make their market share seem to melt away rapidly (e.g. Blackberry to Apple in the smartphone market, in large part due to the platform Apple had with iTunes and, more importantly, the App Store). Every industry today is being disrupted by technology companies that typically use a more modern set of approaches than the behemoth incumbents; those startups unbundle the capabilities of the incumbents and go after a small piece. If they do a good job, they compete and become a real company, or they get acquired. In the latter case, incumbents essentially acqui-hire talent and intellectual property to modernise blind spots.

But the really interesting thing here is the mindset of using data to drive everything, which helps processes stay lean; because of that, it is a wonder that data does not have its own place on the balance sheet today. There are opportunities to place a valuation on various kinds of data, treat data as an operational expense, depreciate data, and monetise data (through the sale of various types or packages), which would affect revenue. This is where the measurement of companies falls short to some extent, at least under US Generally Accepted Accounting Principles (GAAP). It is true that not all data is useful; however, much of it is, and large companies hold a lot of dark data (data not being used) whose productivity gains they are missing out on.

Venture capitalists have often used non-GAAP measures (or International Financial Reporting Standards) to value the health of the private companies in which they invest, and it is important to note there are differences between the two. However, once those companies cross the private-to-public threshold with an Initial Public Offering, they have to report under GAAP (at least if listed on US exchanges). Interestingly, this has been a point of tension between the public and private markets for some years, and this year software-as-a-service companies will have the right to choose how they account for revenue recognition so long as they remain private. Perhaps that is a step in the right direction for accurately measuring the effectiveness of the business model.

The other interesting challenge that companies and governments alike deal with is accounting for the dichotomy of access versus ownership. For example, with as-a-service business models, in a business-to-business transaction, companies can account for their consumption of technology on the balance sheet in a service orientation rather than an asset-depreciation orientation. This is the difference between operational expenditure and capital expenditure, and the former has tax advantages. It is still early days in properly accounting for this lack-of-ownership flow (physical and/or digital), hence some of the private-market changes this year.

Remedies and Solutions

The preceding sections identified a problem in the way productivity is measured in the macro, micro, and managerial aspects of the economy. The next part discusses potential solutions.

For data scientists, there are four key thematic cognitive biases to mind: too much information, not enough meaning, a need to act fast, and what we should remember. In total, there are more than 170 cognitive biases, of which more than 150 are unconscious biases. The devil's advocate to these biases is that the human mind has evolved to use pattern recognition to shortcut problem-solving. One linear explanation why: information overload is drowning, so we aggressively filter (noise becomes signal); a lack of meaning is confusing, so we fill in the gaps (signal becomes a story); needing to act fast, lest we lose our chance, we jump to conclusions (stories become decisions); and it isn't getting easier, so we try to remember the important bits (decisions inform our mental models of the world).

Of note, this is a way to think about biases in system 1 cognition, but there are two modes of cognition. Psychologists refer to the two systems of cognitive processing as system 1 and system 2, and the difference between them is the difference between thinking fast and thinking slow. System 1 thinking happens automatically, quickly, and with little effort; it is practically involuntary. In contrast, system 2 thinking allocates attention to mental activities that require effort, say complex computations. Returning to the analogy of the economy as a computer system, the human compute that comprises that system is bimodal, with system 1 and system 2 processing as the mental compute that eventually shapes action from information. And it is there that applying the wrong compute in the wrong situation can result in error.

Going a bit deeper, here are some specific biases whose relevance doesn't require much imagination. A summary list follows.

  1. Confirmation bias – the data feels right;
  2. Selection bias – data is selected non-randomly and non-objectively;
  3. Outliers – data points measuring significantly above or below the rest of the distribution;
  4. Simpson's Paradox – trends that appear in separate groups of data can reverse when the groups are combined;
  5. Overfitting or underfitting – the model is too complex or too simple;
  6. Confounding variables – a relationship between two variables could be partially or entirely spurious due to omission of a confounding variable;
  7. Non-normality – not all distributions are normal, yet sometimes statistical tests are performed as if they are.
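Simpson's Paradox in particular is easy to demonstrate with a few lines of code. The counts below are the well-known kidney-stone treatment example often used to illustrate it:

```python
# Success counts (successes, trials) for two treatments across two groups,
# from the classic kidney-stone illustration of Simpson's Paradox
groups = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within each group, treatment A has the higher success rate...
within = all(rate(*arms["A"]) > rate(*arms["B"]) for arms in groups.values())

# ...yet pooled across groups, treatment B comes out ahead
a = [sum(arms["A"][i] for arms in groups.values()) for i in (0, 1)]  # [273, 350]
b = [sum(arms["B"][i] for arms in groups.values()) for i in (0, 1)]  # [289, 350]
pooled_reversed = rate(*a) < rate(*b)

print(within, pooled_reversed)  # → True True
```

The reversal happens because the groups are unevenly sized across treatments, which is exactly why naive aggregation of economic data can mislead policy makers.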

Putting it all together, and realising that data out is, in part, only as good as data in, there are opportunities for bias to creep in from the start. Next, in the actual analysis, there are further opportunities to get it wrong based on a different set of biases. And last, in post-analysis, the manner in which results are conveyed can carry biases as well: anywhere from the choice of how to share results (verbal and/or visual), to the context given for those results, to their visual presentation (if shown), to name a few. And indeed, it is some of these types of biases that led to the original adoption of the GDP standard.

The Layers of the Economy Today

As mentioned before, there really are two economies today: a physical economy and a digital economy. The digital economy is the one seeing the greatest gains, in large part because it is easier for digital companies to measure efficiencies internally and externally and then act on that information. As the economy continues to evolve, established companies are unwinding some of their off-shoring plays from the last 20 years and responding with re-shoring (also called in-shoring) plays to get closer to the customer in order to be more productive.

Thematically, speed is the tenet on which to compete, and with that the global landscape of how business is done is already changing. With speed being the hallmark advantage of re-shoring, it is occurring first in developed economies where there is much demand; because of that, the velocity of money is positioned to accelerate.

Where companies of the past were evaluated on metrics within their sector type, perhaps now it is time to classify them by business model. By changing the way companies are classified, they can be measured more accurately, which tells a clearer picture of a company's performance. They can be classified as one of the following four: asset builder, service provider, technology creator, or network orchestrator. The advantage of re-thinking companies to fit one of these four models is that it also enables a clearer view into how a company transitions to a new model type, or even a hybrid. For example, consider Wal-Mart and Amazon. Wal-Mart is an asset builder trying to become a bit of a hybrid with its push to become a network orchestrator through a recent acquisition. Conversely, Amazon is a network orchestrator that is now also a service provider with Amazon Web Services, the most valuable part of the company; a technology creator with the Alexa and Kindle product suites; and, with its push into grocery stores with Amazon Fresh, is considering becoming an asset builder as well. Comparing the two, it is hard not to view Amazon as the more valuable company; indeed, the giant is probably best in class at merging the digital and physical worlds.

So the four business models are really a way to think about the four layers of the economy, highlighting the differences between physical and digital strategies. The other point to discuss is ownership today versus ownership in the past. Today, the rise of digital ownership (music, movies, games, etc.) isn't really ownership at all; it's access. Physical goods (books, real estate, cars, etc.) are typically owned but can also be rented, leased, or licensed. This fundamental shift in ownership (whether for individuals, companies, or even governments) gives rise to the (difficult to account for) as-a-service business model previously discussed. Physical and digital goods can now flow to customers of any type in the form of a service model: products-as-a-service. This model is a sort of remix of popular models of the past (rental, licensing, leasing) but is facilitated, tracked, and delivered through a digital intermediary. One such example is Amazon Web Services, which serves individuals, companies, and governments alike with an array of cloud services. The key to the cloud model is recurring revenue: it is a subscription service. It is more predictable, and users get much more for their spend than with any of the previous popular non-ownership models combined. And the model scales better too.

Technology in Today’s Equation

In considering our journey through the last 15 years, a great infographic, shown below, visualises how high-technology companies overtook all others to become the world's most valuable. Interesting here are the multiples on these companies: consider the price-to-revenue ratio, which in 2013 alone was 8.2 for network orchestrators (which all five of the companies below are), compared with 4.8 for technology creators, 2.6 for service providers, and 2.0 for asset builders. There are other metrics that help explain their value too, such as enterprise-value-to-revenue multiples and EBITA margin percentage.


Returning to a point discussed in the Principles section above, the velocity of money, which measures the rate at which consumers and businesses spend rather than save, matters; below is a view from 1959 to 2016. The chart shows the quarterly ratio of US nominal GDP to the M2 money supply. M2 consists of M1 (currency in circulation, i.e. notes and coins, traveller's checks from non-bank issuers, demand deposits, and checkable deposits) plus savings deposits, certificates of deposit under $100,000, and money market deposits for individuals.
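The ratio itself is straightforward to compute. The figures below are illustrative placeholders, not actual BEA or Federal Reserve data:

```python
def velocity_m2(nominal_gdp, m2):
    # Velocity of M2: how many times a unit of the money stock turns over
    # per period, computed as nominal GDP divided by the M2 money supply
    return nominal_gdp / m2

# Illustrative figures in $bn (placeholders, not official statistics)
v = velocity_m2(nominal_gdp=18600, m2=13000)
print(round(v, 2))  # → 1.43
```

A falling ratio over time, as the chart describes, means each dollar of M2 is supporting fewer dollars of measured GDP per quarter.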


Notice a problem with this measurement? It only accounts for physical money transactions, not digital. The data points to a downward trend in spending since the mid-1990s, yet the rise of the dot-coms occurred then, disrupting the traditional way money circulated throughout the US and the world. We became more connected, and the chart below shows internet usage growth from the mid-1990s to 2013, which is the inverse of the velocity of money. Even though the chart above represents the US and the one below represents the world, there is further evidence in the rise of e-commerce, which last year globally was over $1trn and is expected to grow. Broad money, which accounts for much of the digital currency of the world, is considerably larger than the traditional measurements of physical money used in velocity-of-money calculations. All of which continues the point: the measurement of the movement of value throughout the world's economy is dated and needs revision.

Fortunately, there is a solution. Digital money is on the rise, and hopefully someday it will be the standard. Bitcoin is the most popular form of digital currency today and is even being considered for use by central banks, because of how radically transparent it can be and because it is trackable. Just as companies track user interaction, spending patterns, and more on their digital platforms (if they have them), Bitcoin provides a forum to do the same. The value of the tracking capability of digital currency cannot be overstated; unique identifiers for each bitcoin paired with a decentralised ledger not only ensure speed for transactions but also, in theory, transparency for the total money supply.

Today, lower barriers to entry for starting companies (especially digital ones), paired with rising access to capital, mean some markets are only now being created. These markets add new layers to the economy, typically with an analytics slant. The capital, especially for startups in the US and Europe, is easier to come by via public, private, and/or government investment. Some recent trends to note: the supply of money in venture capital has flat-lined, but private equity has increased, and corporate/foreign direct investment has increased, particularly with the marquee $100B fund announced by SoftBank, which includes capital from Apple and Saudi Arabia, to name a few.

With high access to capital and low barriers to entry for various markets, it is becoming more obvious that times are changing, and the savviest operators and investors are taking advantage. In Platform Revolution, the authors highlight the juxtaposition of the imminent tidal changes by comparing the physical and digital economies. In the industrial era of the 20th century, giant monopolies had supply economies of scale, driven by production efficiencies that lowered unit costs as the quantities produced increased. This made barriers to entry high for new entrants (example companies are BASF, GE, and Ford). But in the late 20th century, as the Internet era rose, a new breed of companies, comparable monopolies, was created by demand economies of scale. The demand side of the economy is the other half of the profit equation, and these companies' success is “driven by efficiencies in social networks, demand aggregation, app development, and other phenomena that make bigger networks more valuable to their users.” Like monopolies with supply economies of scale, monopolies with demand economies of scale enjoy high barriers to entry as well (example companies are Google, Facebook, and Amazon). Perhaps not quite as high, but still high, and some of that might be due to their data troves about customers coupled with intellectual property, patents and processes alike.

Network effects are a really important reality to understand. They are a large part of what makes the aforementioned companies so valuable, and they account for the velocity of user growth. These platforms or networks are two-sided and intermediate supply and demand for a given product or service. The value of the platform quickly shows itself through its ability to expand rapidly (on both sides) with ease. Here, frictionless entry (joining and participation) is an important hallmark of a platform's rapid growth.
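One common stylisation of two-sided network effects, a sketch rather than any company's actual valuation model, has platform value scale with the product of the two sides, since each new producer can potentially serve every consumer and vice versa:

```python
def platform_value(producers, consumers, k=1.0):
    # Stylised cross-side network effect: value scales with the product of
    # the two sides of the platform; k is an arbitrary per-match value
    # constant (a modelling assumption, not an empirical parameter)
    return k * producers * consumers

# Doubling BOTH sides roughly quadruples value under this stylisation,
# one way to see why platform growth compounds
v1 = platform_value(100, 1000)
v2 = platform_value(200, 2000)
print(v2 / v1)  # → 4.0
```

This superlinear scaling is the arithmetic behind the frictionless-entry point: every marginal participant raises the value of membership for everyone on the other side.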

But today, many of the most valuable publicly traded companies are not organic to this dot-com-era model or the era that followed. And how could they be? They were born in a different time. Most of the nimble and successful companies are the ones migrating to become much more digital. They see that the most valuable kind of business model is the network orchestrator, which also happens to be the rarest, hence the arms race to own platforms in new spaces beyond social, such as the modern, more connected (with sensors) industrial world. This raises the question: are we even classifying, and by extension evaluating, companies the right way anymore?

In 1937, the Standard Industrial Classification (SIC) system was introduced; it was later succeeded by the North American Industry Classification System, and, for investors, by Standard and Poor's Global Industry Classification Standard. Currently, the five most valuable companies in the world are (alphabetically) Amazon, Apple, Facebook, Google (Alphabet), and Netflix. What do they all have in common? They are all network orchestrators. The same is true of the four most valuable US private companies: Uber, Airbnb, Palantir, and Snapchat (Snap). Looking beyond a vertical industry classification is critical to competing in today's, and more importantly tomorrow's, economy, as the world is only becoming more connected. Thinking horizontally, towards a platform model, can help companies achieve greater participation with their brand. These digital companies all derive value from lower costs, higher user growth, and higher profits. Capturing modern metrics is part of what makes them so valuable; they can tell much more of a network participant's story because much more of a network interaction is trackable in the digital world (e.g. location, daily active users, purchase patterns, and more). Such information provides a robust graph of the digital overlaid on the physical world, offering insight into how customers interact with products, how they think (about products), where they go, what they do, how long they do it for, and when they do these things.

Moving Forward

Getting the measurement of the economy right is perhaps the biggest challenge; however, some good work has already been done by both governments and companies to find more accurate ways to account for change. Metrics like gross output, the Social Progress Index, and the McKinsey Global Institute Connectedness Index would all be excellent complements to GDP.

Thinking back to the analogy of the economy as a computer system, a common change agent will continue to be startups, which reimagine markets and create new ones. Those startups will be responsible for adding layers to the technology stack. And just as the stack keeps changing as new layers are added, so too must the way it is measured, through the addition of new metrics to track global progress. That measurement agility needs to extend to the micro and managerial parts of the growing digital economy as well. This approach will modernise the way economies are measured and build a new foundation of knowledge for responding to conditions with more current levers; the result will be far more effective than pulling dated levers designed for a different economy than today's. It will be an exciting day when the economy can be effectively measured in real time.

With an orientation towards the future, President Obama's White House sees future opportunities in two areas: robotics and digital communications. The big "so what" here is an acknowledgement that there is a skills gap to meet the requirements for participating in that emerging economy. Entry will require some form of higher education (particularly STEM), on-the-job training, or vocational training.

The other way the US stays ahead is by offering the H-1B visa to bring in foreign talent to help propel new markets forward (as there is a supply of capable individuals outside US borders). Both measures help keep the productivity growth line moving up and to the right.

Additionally, there is a two-part need for the US to compete: an increase in federally funded R&D and an increase in public-private partnerships as the delivery mechanism to merge capital with labour. Privately funded R&D overtook federally funded R&D spending in 1980 (shown above) and the gap has widened steadily since. Society would realise real benefits from closing that gap.


R&D spending is only part of the equation; the other part concerns the nation's infrastructure. Infrastructure upgrades are an increasingly urgent need across the nation, and massive government spending in that domain seems imminent. Returning to the analogy of the economy as a computer system, this is an opportunity to develop infrastructure 2.0, which is already underway in some cities. To do so, the US can leverage public-private partnerships to pair the physical upgrades that need to occur (e.g. roads and bridges) with digital sensors, enabling improved monitoring of the nation's most frequently utilised assets.
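The monitoring idea can be illustrated with a minimal sketch. The asset names, readings, and threshold below are entirely hypothetical; real sensor programmes would involve calibrated units and far richer analytics:

```python
# Hypothetical strain-gauge readings per asset (units are illustrative)
readings = {
    "bridge-101": [0.8, 0.9, 1.4],
    "bridge-102": [0.3, 0.4, 0.35],
}

STRAIN_THRESHOLD = 1.0  # assumed alert level for this sketch

def assets_needing_inspection(readings, threshold=STRAIN_THRESHOLD):
    """Flag any asset whose peak reading exceeds the threshold."""
    return sorted(a for a, vals in readings.items() if max(vals) > threshold)

print(assets_needing_inspection(readings))  # ['bridge-101']
```

Even this trivial rule shows the payoff: instrumented assets surface maintenance needs continuously, rather than waiting for periodic manual inspection.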

Because some of this has never been done before, it is somewhat experimental (though without significant risk), which could justify some government R&D spending, perhaps by designating certain areas of the country as special innovation zones: in essence, places to try things. Moreover, the productivity gain from marrying the two economies, physical and digital, has a two-fold benefit.

The first is that it addresses a pressing safety issue by upgrading deteriorating national infrastructure. The second is that it brings together labour participants from both economies (physical and digital), who have largely diverged in recent years as mentioned above. That collaboration gives each party a deeper understanding of the maintenance challenges and of the type of work such projects will need for years to come, thereby increasing the probability of a long-term positive return on investment.

Accounting for the economy correctly matters, and in both private and public markets, progress in the right direction is underway, with changes to how software-as-a-service companies are measured leading the way. This helps with monitoring the financial health of a new business model, but more can be done to evaluate companies differently across the board. To that end, companies could be reclassified and compared by business model instead of by sector; failing that, a hybrid approach might work.

For incumbent companies to reach the most profitable business model, the network orchestrator, a cultural shift likely needs to occur. That shift involves using data for everything (people, process, and technology) and adopting policies of openness. Part of the success story for network orchestrators is the open application programming interfaces (APIs) to their platforms, which allow developers to build (often valuable) extensions on top of the platform, in turn driving more users to adopt it.

That competitive advantage is measured with data and built on modularity: essentially, work is saved in ways that others can build on with little friction. A data strategy and a build-measure-learn culture, with metrics and benchmarks to track progress, position companies to be more focused and faster, the most important competitive tenet for staying in business today.
