When California passed the Transparency in Supply Chains Act back in 2010—though it only took effect in 2012—it was, in many ways, ahead of its time. The legislation compelled large retailers and manufacturers doing business in the state (those with annual worldwide gross receipts above $100 million) to disclose their efforts to eradicate slavery and human trafficking from their direct supply chains. For over a decade, firms published statements—often high-level—describing supplier codes of conduct, audits, and training programs. There has been criticism, of course—some well-founded—that compliance became more about checking boxes than achieving meaningful change. But that may be shifting. Not because of any change in the law itself, but because the tools available for meeting its intent are becoming far more sophisticated. And that raises, perhaps, as many challenges as opportunities.

Consider Californian apparel companies, which remain at the sharp end of this regulatory expectation. Traditionally, many of these firms relied on supplier self-reporting and third-party audits—both valuable but often limited in reach, especially where Tier 2 and Tier 3 suppliers are concerned. Today, a growing number are exploring the use of AI-driven open-data platforms to go further. These platforms—still evolving, still imperfect—enable companies to scan a wide range of public sources for signals of supplier misconduct. This might include court records, media reports, NGO alerts, or even social media posts about working conditions. The promise is that firms can identify potential supplier grievances earlier, perhaps even before they escalate into serious legal or reputational risks. The reality, however, is messier. Data is noisy. Grievances may be hard to verify or contextualize. And there is a genuine risk of over-reliance on tools that, while powerful, are no substitute for human judgment.
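At its simplest, the scanning step amounts to matching incoming public records against known risk indicators. The sketch below is a deliberate simplification: the names are hypothetical, and the static keyword list stands in for the trained multilingual classifiers and entity resolution a real platform would use.

```python
from dataclasses import dataclass

# Illustrative risk terms only; a production platform would rely on trained
# classifiers, not a static keyword list.
RISK_TERMS = {"forced labor", "unpaid wages", "passport retention"}

@dataclass
class Signal:
    supplier: str
    source: str  # e.g. "court_record", "news", "ngo_alert", "social_media"
    text: str

def flag_signals(signals):
    """Return the subset of signals whose text mentions a risk term."""
    flagged = []
    for s in signals:
        lowered = s.text.lower()
        if any(term in lowered for term in RISK_TERMS):
            flagged.append(s)
    return flagged
```

Even this toy version illustrates the noise problem described above: a keyword hit is a prompt for human review, not a finding.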

So how does one begin to integrate these technologies into annual compliance reporting in a way that aligns with the spirit of the California Act rather than merely layering on another technical process? There is no single formula, of course, but a few steps are emerging as common practice—or at least as reasonable starting points. First, procurement and compliance teams may work with technology specialists to select an appropriate open-data platform. The key is ensuring the tool can ingest multiple data types and sources, with a reasonable level of transparency around how its algorithms prioritize or weight different kinds of input. That is not always easy to assess, and some teams have found themselves revisiting the procurement stage after discovering, too late, that their chosen platform lacked the necessary flexibility.
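One way to make that selection exercise concrete is a weighted rubric scored against each candidate platform. The criteria names and weights below are illustrative assumptions, not a standard; in practice they would be set jointly by procurement, compliance, and technology teams.

```python
# Hypothetical selection rubric; criteria and weights are assumptions.
CRITERIA = {
    "multi_source_ingestion": 3,  # court records, media, NGO feeds, social posts
    "algorithm_transparency": 3,  # documented weighting/prioritization logic
    "configurable_weights": 2,    # can teams adjust how inputs are prioritized?
    "data_export": 1,             # findings can leave the platform for reporting
}

def score_platform(capabilities):
    """Sum the weights of the criteria a candidate platform satisfies."""
    return sum(w for c, w in CRITERIA.items() if capabilities.get(c, False))
```

A low score on a flexibility criterion at this stage is precisely the warning sign that, per the cautionary note above, some teams only discovered after procurement.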

Next, teams typically need to design a data-ingestion protocol. In practice, that means defining which sources to monitor and how frequently to update the data. Some firms opt for quarterly sweeps, others for monthly or even continuous monitoring, depending on risk appetite and resources. It is not just about collecting the data, though. The real work begins in filtering, categorizing, and ultimately interpreting it. False positives are common. Context can be thin. A grievance flagged by an algorithm may, on closer examination, turn out to be based on outdated or misreported information. Here, the value of human review—painstaking as it can be—remains critical.
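The cadence decisions in such a protocol can be captured in a simple schedule. The source types and intervals below are illustrative assumptions mirroring the quarterly/monthly/continuous options mentioned above, not a recommended configuration.

```python
from datetime import date, timedelta

# Hypothetical sweep intervals per source type; actual cadence depends on
# risk appetite and resources.
SWEEP_INTERVAL = {
    "court_records": timedelta(days=90),  # quarterly
    "ngo_alerts": timedelta(days=30),     # monthly
    "social_media": timedelta(days=1),    # near-continuous
}

def sources_due(last_swept, today):
    """Return source types whose interval has elapsed since their last sweep.

    Sources never swept (absent from last_swept) are always due.
    """
    return sorted(
        src for src, interval in SWEEP_INTERVAL.items()
        if today - last_swept.get(src, date.min) >= interval
    )
```

What the schedule cannot automate is the downstream triage: everything a sweep flags still needs the filtering, categorization, and human review described above.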

Once data is gathered and reviewed, the task turns to integration: specifically, how to embed these insights into annual compliance reports in a way that is both transparent and defensible. One approach some Californian apparel firms have adopted is to include a dedicated section in their statements outlining how open-data findings have informed risk assessments or remediation actions. Others have gone further, publishing anonymized summaries of grievances identified and steps taken in response. The right approach often depends on company culture, stakeholder expectations, and legal advice.
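For firms taking the anonymized-summary route, the reporting step can be as mechanical as aggregating reviewed grievances by category and remediation status, with supplier identities withheld. A minimal sketch, assuming a simple dict-based record format that any real system would replace with its own data model:

```python
from collections import Counter

def summarize_grievances(grievances):
    """Build an anonymized summary for an annual statement: counts by
    category and remediation status, with supplier identities withheld."""
    by_category = Counter(g["category"] for g in grievances)
    remediated = sum(1 for g in grievances if g["status"] == "remediated")
    lines = [f"Grievances surfaced by open-data monitoring: {len(grievances)}"]
    lines += [f"- {cat}: {n}" for cat, n in sorted(by_category.items())]
    lines.append(f"Remediation actions completed: {remediated}")
    return "\n".join(lines)
```

Whether to publish at this level of detail, or only to describe the process, remains the judgment call about culture, stakeholders, and legal advice noted above.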

It is worth pausing to note that integrating AI-driven tools into compliance reporting is not merely a technical exercise. It raises broader questions about governance, accountability, and ethics. For instance, how should firms handle situations where open data points to potential issues that are difficult to verify? What obligations arise when a grievance is surfaced by a machine rather than a human source? There is no consensus yet—only an evolving set of practices shaped by experimentation, caution, and, at times, unease about getting it wrong.

There is also the challenge of ensuring these tools do not create a false sense of security. It can be tempting to assume that because a platform has scanned thousands of data points, a supply chain is free of serious issues. Yet the absence of a flag does not necessarily mean the absence of risk. In fact, some of the most serious abuses are the hardest to detect—hidden deep in sub-tiers or obscured by complex ownership structures. Technology helps, undoubtedly, but it is no silver bullet.

What is becoming clear is that modern data tools can support more rigorous compliance with the California Transparency in Supply Chains Act—but they do not replace the need for careful human oversight. They can enhance visibility, certainly, but they also introduce new complexities and demand new skills. Compliance teams that engage with these tools thoughtfully are likely to be better prepared for regulatory scrutiny and stakeholder questions. Exactly how these practices will evolve, and what best-in-class will look like in the years ahead, remains—at least for now—an open question.