
When tracing the emergence of remote health monitoring technologies in 2003, it becomes clear that the very language of the sector was still in flux. The terminology we use today—digital health, telemonitoring, connected devices—simply didn’t have widespread currency yet. ISIC 8621, the code for general medical practice activities, is the best statistical foothold available, but also a blunt one. Within its boundaries, early adopters of health tech mingle with traditional clinics, making the task of identification a nuanced, sometimes exasperating process.
The search begins, as it so often does, with the raw list: all firms registered under ISIC 8621 for the year. At a national or sub-national level, this includes the vast majority of conventional practices—solo offices, group clinics, outpatient centers—alongside a much smaller population exploring remote monitoring. Discerning the latter group requires supplementary filtering. Trade directories, archived health technology conference proceedings, and business registries sometimes flag practices or organizations that trialed remote monitoring devices, even if only in a limited way. Occasionally, industry associations or local health authorities compiled lists of pilot programs, which, with luck, can be cross-referenced to ISIC-coded entities.
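The cross-referencing step can be sketched in code. The entity names, legal suffixes, and pilot lists below are invented for illustration; in practice the normalization rules would need tuning to the registry's naming conventions.

```python
# Hypothetical sketch: matching a raw ISIC 8621 registry against
# externally compiled pilot-program lists. All names are invented.

def normalize(name: str) -> str:
    """Crude name normalization: lowercase, drop punctuation and common
    legal suffixes so 'Acme Clinic Ltd.' matches 'ACME CLINIC'."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    tokens = [t for t in cleaned.split() if t not in {"ltd", "inc", "llc", "gmbh"}]
    return " ".join(tokens)

def cross_reference(registry: list[dict], pilot_names: list[str]) -> list[dict]:
    """Return registry entries whose normalized name appears in any
    pilot-program list (trade directories, proceedings, agency lists)."""
    pilots = {normalize(n) for n in pilot_names}
    return [entry for entry in registry if normalize(entry["name"]) in pilots]

registry = [
    {"name": "Acme Clinic Ltd.", "isic": "8621"},
    {"name": "Riverside Group Practice", "isic": "8621"},
]
pilot_names = ["ACME CLINIC"]  # e.g. from an archived conference programme
flagged = cross_reference(registry, pilot_names)
```

Exact-match-after-normalization is the simplest approach; real registries usually also need fuzzy matching to catch spelling variants.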
Descriptions and records can be slippery. Many early remote monitoring initiatives operated as research projects embedded in larger clinics, or as partnerships between device makers and established care providers. The corporate name on the registry may not hint at any tech involvement at all. In these cases, project announcements, local news stories, and sometimes even patent filings become useful, if indirect, evidence. A practice cited in a clinical trial for home spirometry, or one that received a small grant for diabetes telemonitoring, belongs in the cohort, even if its business classification says otherwise.
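Scanning those indirect sources for period terminology can be mechanized, at least as a first pass. The keyword list and the sample record below are illustrative assumptions, not a vetted vocabulary.

```python
# Hypothetical sketch: flagging indirect evidence of remote monitoring
# in free-text records (grant abstracts, news items, trial citations).
# Keyword list reflects period terminology and is an assumption.

KEYWORDS = {"telemonitoring", "home spirometry", "remote monitoring",
            "telemetric", "telehealth"}

def evidence_hits(text: str) -> set[str]:
    """Return which keywords appear in a record, case-insensitively."""
    lowered = text.lower()
    return {kw for kw in KEYWORDS if kw in lowered}

record = ("Riverside Group Practice received a small grant for "
          "diabetes telemonitoring in partnership with a device maker.")
hits = evidence_hits(record)  # non-empty hits mark the record for review
```

Keyword hits only nominate candidates; each flagged record still needs the manual corroboration the paragraph above describes.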
The next challenge is measurement. Once a set of early health tech users has been identified, the focus shifts to patient-device adoption rates. Public reporting is inconsistent at best. Larger clinics might include device uptake in annual reviews or project summaries, while smaller ones may only hint at adoption through published research or partner communications. Surveys and interviews conducted at the time—especially those sponsored by device manufacturers—can offer ballpark figures, although biases abound. Sometimes, insurers or health agencies tracked device provision under disease management schemes, producing data tables that have survived in one form or another.
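Given such heterogeneous sources, it is often safer to carry a range than a point estimate. The figures and source labels below are invented; the skew comment reflects the sponsorship bias noted above.

```python
# Hypothetical sketch: reconciling ballpark adoption figures from
# heterogeneous sources (annual reviews, manufacturer-sponsored surveys,
# insurer tables). All numbers are invented.

def adoption_range(estimates: dict[str, float]) -> tuple[float, float, float]:
    """Return (low, high, midpoint) across sources rather than a single
    point estimate, since sponsored figures may skew high."""
    values = sorted(estimates.values())
    low, high = values[0], values[-1]
    return low, high, (low + high) / 2

estimates = {
    "annual_review": 0.08,
    "manufacturer_survey": 0.15,  # likely upward-biased
    "insurer_table": 0.10,
}
low, high, mid = adoption_range(estimates)
```

Reporting the spread alongside the midpoint keeps the uncertainty visible in any downstream table.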
To merge adoption rates with firm-level revenues, several routes are available, each with its own flaws. Some early adopters, especially if they grew out of startup culture rather than traditional care, broke out digital health service lines in their financial statements. For the majority, device revenue was subsumed under broader patient care or service categories.
Where data is available, mapping year-on-year changes in revenue to reported device uptake gives a crude but useful proxy for commercial traction. For publicly traded companies, or those that published investor updates, the connection is occasionally explicit: “X% of our revenue in 2003 derived from remote patient monitoring,” or “adoption of telemetric devices drove a Y% increase in service revenue.” These are rare but valuable touchpoints.
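The year-on-year mapping above can be sketched directly. The revenue and uptake series here are invented; the point is the pairing, not the numbers.

```python
# Hypothetical sketch: pairing year-on-year revenue change with reported
# device uptake as a crude proxy for commercial traction. Figures invented.

def yoy_change(series: dict[int, float]) -> dict[int, float]:
    """Fractional year-on-year change, keyed by the later year."""
    years = sorted(series)
    return {y1: (series[y1] - series[y0]) / series[y0]
            for y0, y1 in zip(years, years[1:])}

revenue = {2001: 1.00, 2002: 1.10, 2003: 1.32}   # e.g. millions, invented
uptake  = {2002: 0.05, 2003: 0.12}               # share of patients on devices

# pair each year's uptake with its revenue growth for side-by-side reading
traction = {year: (uptake.get(year), round(change, 2))
            for year, change in yoy_change(revenue).items()}
```

Even where the pairing is clean, it remains a proxy: co-movement of uptake and revenue growth is suggestive, not causal.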
In the absence of granular disclosure, secondary signals become critical. Patterns such as new hiring for tech-support roles, expansion into device logistics, or visible investments in training staff on remote monitoring platforms can mark out genuine engagement with health tech. Likewise, partnerships with device manufacturers or participation in multi-center research studies often imply a material stake in patient-device adoption, even if the financial numbers remain opaque.
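One way to handle such signals systematically is a simple weighted score. The signal names and weights below are illustrative assumptions; any real weighting would need to be justified and logged.

```python
# Hypothetical sketch: scoring secondary signals of health-tech engagement
# when financial disclosure is absent. Names and weights are assumptions.

SIGNAL_WEIGHTS = {
    "tech_support_hiring": 1,
    "device_logistics": 1,
    "staff_training": 1,
    "manufacturer_partnership": 2,  # weighted higher: implies material stake
    "multicenter_study": 2,
}

def engagement_score(signals: set[str]) -> int:
    """Sum the weights of observed signals; unrecognized signals score 0."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

observed = {"staff_training", "manufacturer_partnership"}
score = engagement_score(observed)  # higher = stronger evidence of engagement
```

The score only ranks candidates for closer inspection; it is not a substitute for the financial data it stands in for.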
Throughout this process, documentation is vital. Every assumption—about what qualifies as “remote monitoring,” how device adoption is measured, which revenue streams count—should be logged. This is less about defending the data than about giving future analysts a clear trail to follow or challenge. Health technology, particularly in its early days, lived at the boundaries of what classification systems could capture. Decisions about inclusion and exclusion have lasting effects, even when made in the spirit of best judgment.
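The audit trail itself can be kept as structured data rather than loose notes. The fields and entries below are illustrative; any schema that captures topic, decision, and rationale would serve.

```python
# Hypothetical sketch: a minimal assumption log so future analysts can
# retrace inclusion/exclusion decisions. Fields and entries are invented.

def log_assumption(log: list[dict], topic: str, decision: str,
                   rationale: str) -> None:
    """Append a structured entry; the list doubles as an audit trail."""
    log.append({"topic": topic, "decision": decision, "rationale": rationale})

audit: list[dict] = []
log_assumption(audit, "remote monitoring",
               "include home spirometry trials",
               "device transmits readings off-site")
log_assumption(audit, "revenue streams",
               "exclude generic outpatient fees",
               "no device linkage in billing records")
```

Serializing such a log alongside the dataset lets later analysts challenge individual decisions without reconstructing the whole exercise.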
What emerges from such an analysis is a messy but illuminating picture. Uptake rates, revenue impacts, and firm trajectories are rarely linear or even parallel. Some practices that pioneered remote monitoring faded away as the market consolidated. Others, more cautious, stepped in later and flourished as the technology matured. The interplay between device adoption and commercial performance hints at a landscape in motion—one where statistics capture only a portion of the transformation taking place on the ground.