Efforts to spur the growth of small and medium-sized enterprises (SMEs) are ubiquitous in economic policy. Whether through subsidized lending, tax incentives, or direct grants, governments and development agencies routinely invest in programs designed to accelerate SME performance, hoping that broader economic dynamism will follow. Yet, evaluating whether these efforts truly make a difference requires discipline, clarity, and a structured approach to measurement. Here, the International Standard Industrial Classification (ISIC) system is an underappreciated asset.


ISIC codes provide a way to classify business activities in a consistent and internationally comparable fashion. For those interested in retail and food services, two sectors with a high density of SMEs, ISIC Division 47 (Retail trade, except of motor vehicles and motorcycles) and Division 56 (Food and beverage service activities) allow analysts to track sector-specific trends with far more precision than aggregate statistics alone.


The first step is to assemble a baseline. Using national business registers or administrative data, identify all SMEs coded under ISIC 47 and 56. It is important to define “SME” consistently, whether by employee count, turnover, or the applicable national standard. With this universe established, analysts can observe sectoral output (sales, employment, new firm formation, or another relevant measure) over a window that covers the years both before and after the rollout of the loan or incentive program in question.
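
As a rough illustration of this step, the sketch below filters a register extract down to SMEs in Divisions 47 and 56 and builds an annual output series. The file name, the column names (firm_id, isic_code, employees, year, sales), and the 250-employee cut-off are all assumptions to be replaced with the actual register layout and national SME definition.

```python
# A minimal sketch of the baseline step. File name, column names and the
# 250-employee cut-off are assumptions, not a prescribed layout.
import pandas as pd

register = pd.read_csv("business_register.csv")  # hypothetical register extract

# Keep firms whose ISIC code falls under Division 47 or 56.
register["isic_division"] = register["isic_code"].astype(str).str[:2]
in_scope = register[register["isic_division"].isin(["47", "56"])]

# Illustrative SME threshold (headcount under 250); use the national standard.
smes = in_scope[in_scope["employees"] < 250]

# Baseline output series: total sales and firm counts per division and year.
baseline = (
    smes.groupby(["isic_division", "year"])
        .agg(total_sales=("sales", "sum"), firms=("firm_id", "nunique"))
        .reset_index()
)
print(baseline.head())
```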


Tracking changes in output alone, however, tells only part of the story. The core challenge is to determine whether observed improvements result from the incentive or simply reflect broader economic trends. For this, difference-in-differences (DiD) analysis is a valuable tool. The logic is simple: compare the change in sectoral output among SMEs exposed to the loan program with the change among a control group of similar SMEs that were not eligible for, or did not receive, the loans.
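
In its simplest two-group, two-period form, the estimator can be written out explicitly, where the bars denote group averages of the outcome (sales, say) before and after the program:

```latex
\widehat{\mathrm{DiD}}
  = \bigl(\bar{Y}^{\mathrm{treated}}_{\mathrm{post}} - \bar{Y}^{\mathrm{treated}}_{\mathrm{pre}}\bigr)
  - \bigl(\bar{Y}^{\mathrm{control}}_{\mathrm{post}} - \bar{Y}^{\mathrm{control}}_{\mathrm{pre}}\bigr)
```

The identifying assumption is that, absent the program, treated and control SMEs would have followed parallel trends; the control group's change stands in for what would have happened to the treated firms anyway.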


Constructing this analysis requires careful attention to group definition. The treatment group consists of SMEs under ISIC 47 or 56 that received loans; the control group is made up of otherwise similar SMEs (matched on firm size, location, and pre-program growth rates) that did not. Data permitting, one can further stratify by sub-sector or geographic region to increase robustness.
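
One rough way to operationalize the matching is sketched below: exact matching on region combined with nearest-neighbor matching on standardized size and pre-program growth. The file and column names (treated, region, employees, pre_growth, firm_id) are assumptions, and a real application would typically use propensity-score or other more careful matching.

```python
# A rough matching sketch: exact match on region, then nearest neighbor on
# standardized firm size and pre-program growth. File and column names are
# assumptions.
import pandas as pd
from sklearn.neighbors import NearestNeighbors

firms = pd.read_csv("smes_isic_47_56.csv")  # hypothetical firm-level extract

covars = ["employees", "pre_growth"]
matched = []
for region, grp in firms.groupby("region"):
    treated = grp[grp["treated"] == 1]
    controls = grp[grp["treated"] == 0]
    if treated.empty or controls.empty:
        continue
    # Standardize so both covariates contribute comparably to the distance.
    mean, std = grp[covars].mean(), grp[covars].std().replace(0, 1)
    nn = NearestNeighbors(n_neighbors=1).fit((controls[covars] - mean) / std)
    _, idx = nn.kneighbors((treated[covars] - mean) / std)
    matched.append(controls.iloc[idx.ravel()])

control_group = pd.concat(matched).drop_duplicates("firm_id")
```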


The analytical process typically involves several steps. First, gather output data for both groups before and after the intervention period. Next, calculate the average change in output for each group over the relevant timeframe. The difference between these two changes is the DiD estimator: a measure of the average treatment effect of the loan program, net of general sectoral or macroeconomic fluctuations.
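
Concretely, with a firm-year panel in hand, the two-by-two calculation and its regression equivalent might look like the sketch below (file and column names are assumptions). The coefficient on the interaction term reproduces the difference of the group-average changes and makes it easy to attach clustered standard errors.

```python
# A compact sketch of the DiD calculation, assuming a firm-year panel with
# columns firm_id, year, sales, treated (0/1) and post (0/1).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("sme_panel.csv")  # hypothetical panel built from the baseline data

# Step-by-step version: the average outcome in each of the four cells,
# then the difference of the two group-level changes.
cells = panel.groupby(["treated", "post"])["sales"].mean()
did_manual = (cells[1, 1] - cells[1, 0]) - (cells[0, 1] - cells[0, 0])

# Equivalent regression version; the coefficient on treated:post is the DiD
# estimate, with standard errors clustered by firm.
model = smf.ols("sales ~ treated + post + treated:post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm_id"]}
)
print(did_manual, model.params["treated:post"])
```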


In practice, this is rarely straightforward. Loan programs are often rolled out in phases, with eligibility criteria shifting or multiple incentives overlapping. SMEs may self-select into programs based on unobserved characteristics (growth orientation, managerial skill, or risk tolerance) that also affect their performance. These issues call for robustness checks: placebo tests using pre-intervention periods, sensitivity analyses with alternative control groups, or, where possible, the inclusion of additional covariates to control for observable firm characteristics.
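
A placebo test, for example, can reuse the same specification on pre-intervention years with an artificial treatment date. The sketch below assumes the column names used earlier and purely illustrative years; a "significant" placebo effect is a warning that the two groups were already diverging before the program began.

```python
# Placebo sketch: keep only pre-program years and pretend treatment started
# earlier. The years (program launch in 2019, placebo cut in 2017) and the
# column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("sme_panel.csv")
pre = panel[panel["year"] < 2019].copy()            # discard the real post-program period
pre["fake_post"] = (pre["year"] >= 2017).astype(int)

placebo = smf.ols("sales ~ treated + fake_post + treated:fake_post", data=pre).fit(
    cov_type="cluster", cov_kwds={"groups": pre["firm_id"]}
)
# A significant placebo coefficient suggests pre-existing differential trends.
print(placebo.params["treated:fake_post"], placebo.pvalues["treated:fake_post"])
```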


One of the virtues of ISIC-coded analysis is its transparency and replicability. Results can be broken down by detailed sub-sector, making it easier to identify where incentives are most or least effective. Is the program boosting growth primarily in grocery retail, or are gains concentrated among specialized food service providers? Are urban SMEs benefiting more than their rural counterparts? Disaggregation by ISIC code helps policymakers refine future interventions and better target their resources.
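
Continuing the earlier sketch, disaggregation can be as simple as re-estimating the same specification within each ISIC group (here taken as the first three digits of the code); the file and column names remain assumptions.

```python
# Disaggregation sketch: re-run the same specification within each ISIC group.
# Column names are assumptions, as above.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("sme_panel.csv")
panel["isic_group"] = panel["isic_code"].astype(str).str[:3]

effects = {}
for code, sub in panel.groupby("isic_group"):
    # Skip groups without both treated and control firms in both periods.
    if sub["treated"].nunique() < 2 or sub["post"].nunique() < 2:
        continue
    fit = smf.ols("sales ~ treated + post + treated:post", data=sub).fit()
    effects[code] = fit.params["treated:post"]

# Sub-sectors where the estimated effect is largest or smallest.
print(pd.Series(effects).sort_values())
```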


It is also worth looking beyond the headline findings. Sometimes the effect of a program is indirect—strengthening supply chains, encouraging new market entry, or driving improvements in business practices that are not immediately reflected in output data. Qualitative interviews, case studies, and ongoing monitoring can enrich the statistical analysis and reveal dynamics that numbers alone may miss.


Limitations should be acknowledged. Data quality, especially among small firms, can be uneven. Not all SMEs are consistently or accurately coded, and informal businesses may escape the registry altogether. Nonetheless, the discipline of ISIC-based analysis, coupled with robust econometric techniques like difference-in-differences, brings much-needed rigor to the perennial challenge of program evaluation.


In a policy landscape crowded with claims and counterclaims, such rigor is invaluable. By using ISIC codes to structure, disaggregate, and compare, analysts can provide a clearer view of what SME incentives actually achieve—and where they might fall short. This, in turn, supports better decisions, more accountable spending, and, hopefully, more resilient SME sectors in the years to come.