Data Scoring as a Service: CSAM text and link detection
by Vistalworks
A risk-scoring data service that detects patterns associated with CSAM text and link sharing in client datasets
Failing to identify and tackle CSAM link sharing on digital services has serious reputational and legal implications for organisations. New regulations such as the UK Online Safety Act and the EU Digital Services Act (DSA) have introduced fines of 6%-10% of global annual revenue for platforms that do not effectively protect the public from harmful and illegal content.
Vistalworks has spent the last year working on the UK Government’s Safety Tech Challenge to develop technology to disrupt the sharing of text links to CSAM, in association with enforcement and intelligence agencies such as GCHQ and the Home Office. We’ve identified new and rapidly evolving ways that offenders are exploiting the open web to increase their reach, pulling new vulnerable people into criminality and exposing mainstream platforms and digital service providers to serious legal risk.
Vistalworks’ solution detects and risk-profiles non-image-based indicators of CSAM distribution in client datasets. The subscription service is contextual, continually updated, and informed by expert behavioural research and specialist law enforcement input.
Our innovative solution, Data Scoring As A Service for CSAM text and text link detection, is particularly relevant to:
Public sector, including online safety regulators and specialist enforcement
NGOs and specialist ecosystem service providers with a CSAM-specific remit
Search engines, platforms, marketplaces and similar online services indexing and/or storing text and/or serving auto-generated prompts and recommendations
Discussion forums, social media platforms, communities, and similar web-publishing services with a text component
Platforms, digital services, marketplaces and online communities whose end-users are vulnerable to targeting by offenders associated with child exploitation and CSAM.
Key Features
Reduces the risk of inadvertently publishing and distributing illegal material by detecting evasive and evolving characteristics of CSAM text and link sharing in client datasets
Contextual to reduce false positives, with algorithms continually updated and adaptive to mitigate offender responses to removal
Informed and updated by expert behavioural researchers and offender profiling, with specialist legal and law enforcement input
Accurate in detecting high-risk and context dependent terms, phrases, behaviours and links in small, sparse and large datasets - including search engine indexes, chat and comment threads, marketplace listings, and generative AI outputs
Available as a secure bulk upload/download service, allowing data owners to securely transfer CSV files (or similar) through Azure Cloud for rapid automated scoring by Vistalworks
API and custom case management systems integrations also available
Full outsourcing of the end-to-end process available, with consultancy-style, findings-only reporting if required
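For the bulk upload route, a client typically serialises its text records into a CSV batch before transfer. The sketch below shows what that preparation step might look like; the field names and batch format are illustrative assumptions, not Vistalworks' actual schema, and the transfer itself would use whatever Azure container and credentials are agreed with Vistalworks.

```python
import csv
import io

def build_batch_csv(records):
    """Serialise (record_id, text) pairs into a CSV payload for bulk scoring.

    Hypothetical format: a header row followed by one row per record.
    The real column layout would be specified by the service provider.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["record_id", "text"])  # assumed header
    for record_id, text in records:
        writer.writerow([record_id, text])
    return buf.getvalue()

payload = build_batch_csv([
    ("msg-001", "example comment text"),
    ("msg-002", "example listing description"),
])
# The payload would then be uploaded to the agreed Azure Blob Storage
# container (e.g. via the azure-storage-blob SDK); credentials and
# container details are omitted here.
```

Scored results would come back the same way, as a downloadable file of risk scores keyed by record ID, ready for triage or ingestion into a case management system.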
The underpinning risk-analysis models retain a ‘human in the loop’ and use direct input from subject-matter specialists. This means the service is not classified as an AI system under EU regulations and is therefore eligible for use by the public sector in an enforcement and investigation capacity.
This is a text and text-link analysis service, so it does not involve the viewing or transfer of high-risk images. However, reports do contain upsetting data related to extreme criminal activity and should be handled in accordance with local laws, security requirements, and staff well-being best practice. Vistalworks can help if you do not have the internal expertise or processes to manage this.