How AI Is Reshaping Cyber Crime And Policing
07/28/2023 :: Jeremy Pickett :: Become a Patron :: Buy Me a Coffee (small tip) :: @jeremy_pickett :: Discussion (FB)
Interfacing with law enforcement can be a time-consuming, confusing process. This long-form article aims to dispel some of the hesitancy around reporting online cybercrime correctly, both in the traditional sense and with the aid of an AI assistant.
TL;DR: Artificial intelligence presents immense opportunities alongside risks across the cyber landscape - for criminals and police alike. As AI diffuses into the fabric of finance, law, and data systems, it disrupts old paradigms of computer security offense and defense. Both sides now leverage automation, pattern recognition, predictive analytics, and other machine learning techniques to pursue their goals. The very nature of cyber risks, crimes, and policing shifts in this new era of intelligent algorithms, for better or worse.
New Frontiers in Automated Cyber Crime
First, AI lowers barriers to cyber lawbreaking by automating time-consuming manual processes. For instance, AI-powered hacking tools now replace the human effort once needed for activities like vulnerability probing, malware generation, phishing template creation, and credential stuffing. Offensive AI techniques like generative adversarial networks (GANs) can automatically analyze defenses and adapt attacks to bypass them. And intelligent bots relentlessly scale attacks that once required manual oversight.
In effect, AI acts as a digital steroid amplifying the capabilities of criminal hackers. Tools once exclusively in the arsenal of state-level threat actors now diffuse to cybercriminals, who reap efficiency gains similar to legal enterprises adopting AI. In many cases AI becomes the multiplier making criminal hacking campaigns profitable.
For example, AI-enhanced banking malware now defeats behavioral fraud detection with techniques like injecting delays to mimic natural typing rhythms during illicit money transfers. AI programs automatically register accounts at scale for resale in the criminal underground. And AI chatbots engage in conversational phishing to socially engineer victims. According to threat researchers, AI-enhanced social engineering at scale presents one of the most dangerous emerging threats as natural language processing improves.
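The cat-and-mouse here cuts both ways: if malware injects human-like delays, defenders can still look for timing that is suspiciously uniform. A minimal sketch (the threshold and statistic are illustrative, not from any product) flags event streams whose inter-event gaps have an unusually low coefficient of variation — humans type in bursts, naive bots sleep on a near-fixed schedule:

```python
import statistics

def looks_scripted(event_times, cv_threshold=0.25):
    """Flag an event stream whose inter-event timing is suspiciously
    uniform. Humans produce highly variable gaps; a naive bot sleeping
    a fixed (or barely jittered) interval yields a low coefficient of
    variation (stdev / mean)."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    if len(gaps) < 2:
        return False  # not enough data to judge
    mean = statistics.mean(gaps)
    if mean == 0:
        return True
    return statistics.stdev(gaps) / mean < cv_threshold

# A bot sleeping ~100 ms between keystrokes vs. a human's bursty rhythm.
bot = [0.0, 0.10, 0.20, 0.31, 0.41, 0.52, 0.62]
human = [0.0, 0.08, 0.35, 0.41, 0.90, 1.02, 1.60]
print(looks_scripted(bot), looks_scripted(human))  # True False
```

Real fraud systems combine many such behavioral signals; this one alone is easily defeated by better jitter, which is exactly the arms-race dynamic the article describes.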
Across domains from email compromise to ransomware, automation powered by machine learning removes the drudgery from cyber crime while also improving effectiveness. In the process, AI changes criminal business models and dynamics much as it transforms legal industries. Ultimately automation shifts the balance between attackers and defenders by easing once-difficult hacking techniques and massively scaling others.
Trademark Search Engines
The USPTO's (United States Patent and Trademark Office) TESS database allows searching registered trademarks. Researchers can look for potential infringement of trademarks and brand names.
The EUIPO's (European Union Intellectual Property Office) eSearch Plus allows searching EUTMs (European Union Trade Marks) along with trademarks from other participating national offices.
Private search engines like TM Search from Thomson CompuMark aggregate trademark data from numerous national registries worldwide. However, official government databases like USPTO TESS provide the most legally authoritative results.
WIPO's Global Brand Database consolidates trademark data from multiple national trademark offices and commercial providers, enabling brand searches globally.
Google Patents allows full-text search of published patent applications and issued patents from the USPTO, EPO, and other participating offices. This can uncover patents related to brands and products.
Commercial providers like Corsearch offer brand protection services that combine monitoring trademarks, domains, web content, and other data sources for potential brand infringement.
Dark Web Analysis
General automated dark web scanning tools remain limited. However, researchers can manually search sites like hidden wikis, forums, illicit markets, and paste sites located via search engines like Ahmia, Torch, and OnionLand. This requires using the Tor browser for access.
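For researchers scripting this workflow, the practical details are routing traffic through Tor's local SOCKS proxy and sanity-checking onion addresses before requesting them. A stdlib-only sketch (Ahmia's clearnet query pattern is an assumption based on its public site; the sample onion hostname is synthetic):

```python
import re
from urllib.parse import quote

# Tor's default SOCKS5 proxy when the Tor service runs locally
# (the Tor Browser bundle instead listens on port 9150).
TOR_PROXY = "socks5h://127.0.0.1:9050"

# Modern v3 onion services use 56-character base32 hostnames.
ONION_RE = re.compile(r"^[a-z2-7]{56}\.onion$")

def is_v3_onion(hostname: str) -> bool:
    """Sanity-check a hostname before routing a request over Tor."""
    return bool(ONION_RE.match(hostname.lower()))

def ahmia_query_url(term: str) -> str:
    """Ahmia also exposes a clearnet front end; build its query URL
    (the endpoint pattern is an assumption, check ahmia.fi)."""
    return "https://ahmia.fi/search/?q=" + quote(term)

print(is_v3_onion("a" * 56 + ".onion"))   # synthetic example hostname
print(ahmia_query_url("example breach"))
# Actually fetching a .onion URL requires a SOCKS-capable HTTP client,
# e.g. requests plus PySocks with proxies={"https": TOR_PROXY}.
```

Keeping the fetch itself behind Tor (never a direct request) is the operational point; the validation step avoids leaking malformed lookups to clearnet DNS.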
When investigating a specific entity, checking for database breaches containing its data on dark web sites can yield insights. However, manually verifying breaches proves difficult.
For monitoring discussions involving a company, brand, or individual's name, setting up crawl alerts on dark web sites via services like Recorded Future or ZeroFox is possible. But coverage gaps persist.
Linked Library Analysis
For compiled binary programs, examining linked dynamic libraries and other dependencies can uncover behaviors and capabilities. Tools like ldd on Linux enumerate required libraries, while strings extracts human-readable strings from binary files for further analysis.
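A quick way to make ldd output machine-readable is to parse its lines into a name-to-path map. The sketch below is a simplified parser over ldd's common output shapes, with one important caveat preserved as a comment: ldd can execute the target's loader, so it should only be pointed at trusted binaries.

```python
import shutil
import subprocess

def parse_ldd(output: str) -> dict:
    """Parse `ldd` output into {library name: resolved path or None}.
    Lines look like 'libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)'."""
    deps = {}
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        if "=>" in line:
            name, _, rest = line.partition("=>")
            path = rest.strip().split()[0] if rest.strip() else None
            if path == "not":  # "=> not found"
                path = None
            deps[name.strip()] = path
        else:  # e.g. 'linux-vdso.so.1 (0x...)' — no resolved path
            deps[line.split()[0]] = None
    return deps

def list_dependencies(binary_path: str) -> dict:
    """Run ldd if available. Caution: ldd invokes the dynamic loader,
    so never use it on untrusted samples — prefer static ELF readers."""
    if shutil.which("ldd") is None:
        raise RuntimeError("ldd not available on this system")
    out = subprocess.run(["ldd", binary_path],
                         capture_output=True, text=True, check=True)
    return parse_ldd(out.stdout)

sample = """\
\tlinux-vdso.so.1 (0x00007ffd5a5f2000)
\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3a1c000000)
\tlibmissing.so => not found
"""
print(parse_ldd(sample))
```

"not found" entries are often the interesting ones during triage, since they hint at a binary built for a different environment than the one under analysis.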
On Windows, utilities like Dumpbin, Dependency Walker, and PE Explorer serve similar purposes. But hands-on behavioral analysis in sandboxes often works better to profile dependencies in action.
Monitoring network traffic for connections to external servers can also identify linked communications and dependencies that are not present as local libraries. This may reveal phoning home or command and control channels.
Binary diffing tools compare versions of compiled programs to identify changes and new capabilities added over time. Differences in library dependencies and imports often signal new or altered functionality.
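A crude but instructive form of this is diffing the printable strings extracted from two versions of a binary — new API names or URLs often betray added functionality. This sketch mimics the Unix `strings` tool in pure Python (real diffing tools like BinDiff or Diaphora compare control flow, not just strings):

```python
import re

STRINGS_RE = re.compile(rb"[\x20-\x7e]{4,}")  # printable ASCII runs, length >= 4

def extract_strings(data: bytes) -> set:
    """Mimic the Unix `strings` tool: pull printable ASCII runs."""
    return {m.group().decode("ascii") for m in STRINGS_RE.finditer(data)}

def diff_strings(old: bytes, new: bytes):
    """Report strings added and removed between two binary versions."""
    a, b = extract_strings(old), extract_strings(new)
    return sorted(b - a), sorted(a - b)

# Toy byte blobs standing in for two versions of a compiled sample.
v1 = b"\x00\x01GetProcAddress\x00connect\x00\x7f"
v2 = b"\x00\x01GetProcAddress\x00connect\x00InternetOpenUrlA\x00"
added, removed = diff_strings(v1, v2)
print("added:", added, "removed:", removed)
```

Here the appearance of a WinINet function name between versions would prompt a closer look at newly added network capability.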
Fuzzing tools feed invalid, unexpected, or random data into software programs and APIs to analyze failure modes and potentially uncover vulnerabilities or backdoors related to linked code dependencies.
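The core loop of a fuzzer is tiny: generate malformed input, feed it to the target, and record how it fails. A minimal sketch against a toy length-prefixed parser (real fuzzers like AFL++ or libFuzzer add coverage guidance and crash triage on top of this idea):

```python
import random

def parse_record(data: bytes) -> str:
    """Toy parser under test: expects [1-byte length][payload]."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return payload.decode("utf-8")

def fuzz(target, runs=1000, max_len=16, seed=0):
    """Feed random byte strings to `target`, tallying the exception
    types raised. Expected errors (ValueError here) are fine;
    unexpected types (IndexError, segfault in native code) are leads."""
    rng = random.Random(seed)  # seeded for reproducible runs
    failures = {}
    for _ in range(runs):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(blob)
        except Exception as exc:
            name = type(exc).__name__
            failures[name] = failures.get(name, 0) + 1
    return failures

print(fuzz(parse_record))
```

Even this naive version quickly surfaces both the parser's deliberate error paths and incidental ones (such as UTF-8 decode failures), which is the behavior-mapping the paragraph describes.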
Reverse engineering using disassemblers like IDA Pro allows auditing proprietary linked code that may be closed source or intentionally obfuscated to hide vulnerabilities or illegal capabilities.
DMCA Notice Lookup
The Lumen database aggregates DMCA takedown notices sent to internet platforms regarding alleged copyright infringement. It provides transparency into copyright claims and removals.
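Lumen's notices can be searched programmatically; the sketch below only builds a search URL. The endpoint and parameter names follow Lumen's public search page and should be treated as assumptions — the current API documentation (and the researcher token required for full access) lives at lumendatabase.org.

```python
from urllib.parse import urlencode

LUMEN_SEARCH = "https://lumendatabase.org/notices/search"

def lumen_search_url(term: str, page: int = 1) -> str:
    """Build a Lumen notice-search URL. Endpoint and parameter names
    are assumptions based on Lumen's public site; verify against the
    current API docs before relying on them."""
    return LUMEN_SEARCH + "?" + urlencode({"term": term, "page": page})

print(lumen_search_url("example.com takedown"))
```

Pagination matters in practice: well-known domains accumulate thousands of notices, so any analysis script needs to walk pages rather than read a single response.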
Additionally, platforms like Google, Twitter, Facebook and Reddit publish their own transparency reports on specific DMCA notices received and compliance rates. However, most only report high-level statistics, not granular details.
For filing new DMCA notices, the U.S. Copyright Office provides standardized online forms. Each online platform also maintains their own DMCA reporting procedures that rightsholders must follow.
Lumen, formerly known as Chilling Effects, also analyzes and catalogs DMCA takedown notices, providing critical insight into how the opaque notice-and-takedown system is used.
Relying solely on self-reported notices and statistics from platforms provides an incomplete picture. Independent research and reporting illuminates blind spots.
Abusive DMCA claims also frequently go unchallenged given the burden of contesting notices. Targets require legal resources for counter-notices disputing unwarranted accusations of infringement.
Social Media Profiles
Nearly every major social media platform like Facebook, Instagram, Twitter, YouTube, Reddit, etc. provides search capabilities to look up users' public posts and profiles. These can reveal affiliations, opinions, images, aliases, and other intelligence.
Aggregator sites like Social Searcher perform cross-platform searches but miss non-public posts. Paid tools like Sow Reach go deeper by aggregating public data combined with gray area breach data.
However, terms of service generally prohibit scraping user data en masse. Manual examination of cross-referenced profiles provides richer context safely within agreements.
Social media forensic tools like CacheBack allow historical analysis of profiles, revealing deleted or edited posts that may provide additional signals.
Law enforcement can request private user data from social media companies via formal legal processes. But terms of service restrict such access for civilian researchers.
Advanced threat groups have developed exploit-based techniques to exfiltrate non-public social media data at scale, though this remains ethically and legally prohibited for commercial use.
Legal Records Lookup
Sex offender registries for the U.S. are consolidated at NSOPW. But each state maintains its own more detailed portal as the authoritative source, such as California's Megan's Law site.
For a fee, commercial background check services like GoodHire, Checkr and Intelius provide aggregated criminal record lookups incorporating certain state, local, and federal databases via authorized channels. Some states also provide official paid services.
However, the best option for individuals checking their own records remains obtaining an Identity History Summary from the FBI, either directly or through an FBI-approved channeler. This provides the most accurate picture of federal criminal data.
Police departments and courts maintain additional local criminal records not aggregated into commercial databases. But accessing these requires formal information requests subject to legal restrictions.
Non-profit organizations like the National Consumer Law Center fight for reforms and policies upholding individuals' access to inspect their own background reports and resolve any inaccurate or unconstitutional data.
Housing, employment, insurance and other sectors using background checks must comply with the Fair Credit Reporting Act and not use unreliable data sources that could propagate inaccuracies or civil rights violations.
Business Information Lookup
Each Secretary of State website allows searching business entity registration details for corporations, LLCs, partnerships, etc. registered in that state. This reveals authorized signers, startup date, business type, and other official data.
Resources like GuideStar detail U.S. nonprofits' IRS filings and regulatory documents, while tools like Pitchbook and Owler provide private company profiling beyond what's in official state filings.
For deep dives into global public companies, financial filings from SEC EDGAR combined with business profiles from Bloomberg provide rich detail on firms' operations, financials, leadership, and more.
Freedom of Information Act (FOIA) requests can potentially access additional business records held by federal or state regulatory agencies, though exemptions often apply for confidential data.
News monitoring offers additional context, with tools like LexisNexis aggregating global media sources for company/executive coverage and controversy monitoring.
Discrete inquiries with current or former employees, business partners, and vendors can provide candid insider perspectives, within the bounds of legal constraints against corporate espionage.
Even the use of privacy coins sends a signal. Authorities and observers might regard frequent privacy coin usage with increased suspicion, given their association with illicit activity. Even within networks like Monero, patterns or 'metadata' might offer clues to investigators. In fact, the very act of converting privacy coins to more 'mainstream' cryptocurrencies or fiat can expose users, as these transactions often occur on regulated exchanges with know-your-customer protocols.
New Defenses With AI Monitoring and Analytics
In response, cyber defenders also increasingly adopt AI techniques to counter the rising computational power of adversaries. AI-powered defenses leverage capabilities including pattern recognition from massive threat data sets, analysis automation, and predictive network security models. AI systems monitor network traffic and endpoint activity for subtle indicators of compromise missed by legacy rules-based defenses.
Threat intelligence aggregated algorithmically across vendor and open sources reveals connections between emerging and known attack patterns. By processing billions of signals and adapting based on updated context, AI threat detection aims to keep pace with attackers' shape-shifting tradecraft. Though challenging to implement well, machine learning models incrementally improve at flagging suspicious anomalies and previously unidentified threats.
AI also unlocks new frontiers in network analysis and forensics post-breach. By mining mundane artifacts, algorithms uncover attacker activity hiding within mountains of log and traffic data. Analysts then leverage threat-hunting platforms exposing anomalies via visualizations, statistical summaries, and natural language queries of surging case data. Combined with AI accelerating data ingestion, such solutions surface needles in immense cybersecurity haystacks.
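At its simplest, this kind of statistical surfacing can be a z-score scan over aggregated log counts — a toy stand-in for what threat-hunting platforms do at scale, with an illustrative threshold:

```python
import statistics

def zscore_anomalies(counts, threshold=3.0):
    """Return indices of buckets whose event counts sit more than
    `threshold` standard deviations from the mean — a minimal version
    of the statistical summaries threat-hunting tools surface."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hourly failed-login counts; hour 5 hides a credential-stuffing burst.
hourly = [12, 9, 11, 10, 13, 240, 11, 12, 10, 9, 12, 11]
print(zscore_anomalies(hourly))  # [5]
```

Production systems replace the global mean with seasonal baselines (weekday vs. weekend, business hours vs. overnight) so that normal diurnal swings don't drown out real bursts.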
According to experts, AI's pattern extrapolation capabilities will enhance attribution by identifying behavioral and technical indicators linked to known threat groups. This mirrors efforts profiling criminal suspects based on modus operandi. However, for now strategic hackers still foil behavioral AI systems through misdirection, false flags, and innovation - much as intelligent human adversaries seek to evade psychological profiling. But algorithms continue incrementally learning.
On the defensive side, AI also automates tasks from network monitoring to vulnerability management using techniques like robotic process automation. Learning models further adapt security configurations to actual asset risk profiles, boosting efficiency alongside efficacy. Though AI cannot replace analyst expertise and intuition, it acts as a productivity multiplier, helping keep smart security teams on par with adversaries.
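Adapting remediation to asset risk profiles can be sketched as a scoring function that blends a CVSS base score with local context. The weights below are purely illustrative, not drawn from any standard or product:

```python
def priority(cvss: float, asset_criticality: float,
             exposed: bool, exploit_available: bool) -> float:
    """Blend a CVSS base score (0-10) with local context:
    asset criticality (0-1), internet exposure, and known exploit
    availability. Weights are illustrative assumptions."""
    score = cvss * (0.5 + 0.5 * asset_criticality)
    if exposed:
        score *= 1.3   # internet-facing assets patched first
    if exploit_available:
        score *= 1.5   # public exploit code raises urgency
    return round(min(score, 10.0), 1)

findings = [
    ("internal wiki XSS", priority(6.1, 0.2, False, False)),
    ("edge VPN RCE", priority(8.8, 0.9, True, True)),
]
for name, p in sorted(findings, key=lambda f: -f[1]):
    print(f"{p:>4}  {name}")
```

The point of the sketch is the shape, not the numbers: the same medium-severity CVE can rank very differently on a crown-jewel server versus an isolated lab box, which is exactly the context-sensitivity the paragraph attributes to learning models.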
Links Specifically to Scam and Fraud Reports
https://hackertarget.com/scan-membership/ (need to purchase monthly license)
https://socradar.io/the-ultimate-list-of-free-and-open-source-threat-intelligence-feeds/
Emerging Legal Challenges With AI and Liability
However, the spread of AI also introduces myriad emerging legal issues to cybersecurity. Application of laws governing areas like fraud, IP, privacy, and contracts grows complex in the context of software agents operating autonomously. Questions of liability quickly multiply with automation.
For instance, how does criminal law treat an AI bot versus the hacker that deployed it? Standards for "intent" blur when algorithms exceed programmers' expectations through emergent behavior. Information security laws also rely on thresholds of "unauthorized" access that prove slippery when applied to AI tools gathering data through technical means of circumvention.
Civil liability raises thorny questions as well. Should manufacturers bear responsibility when flaws in learning algorithms or neural networks enable criminal hacking or privacy invasions? How does liability extend through supply chains of aggregated data used for AI training? Do companies possess duties to monitor and audit functionality of AI protections sold commercially?
Precedents around liability for physical world autonomous systems like surgical robots will likely inform evolving cyber-AI case law. But the tendency towards opaque algorithms protected as trade secrets undermines accountability necessary in public legal systems. Resolving what level of human culpability remains for computer-generated actions demands rethinking antiquated statutes in the face of AI's disruptive powers.
AI Arms Race in Information Security
In effect, the emergent spectrum of AI cyber capabilities accelerates what experts describe as an unfolding "infosec arms race". Powerful AI hacking tools democratize advanced threats for criminals, requiring defenders to adopt intelligent systems in turn to analyze evolving dangers in complex interconnected networks. This self-reinforcing cycle quickly accelerates, as each side leverages automation against the other.
Over time, analysts warn this competition could render AI systems merely "table stakes" in cybersecurity. Much as anti-malware became a basic requirement, intelligent algorithms may eventually constitute the baseline price of admission to information security. However, outcomes remain contingent on how governance and incentives evolve alongside the technology itself.
For now AI insecurity poses risks while also expanding the solution space. But difficult policy choices loom around issues like encryption, law enforcement demands for backdoors, transparency, human oversight over AI systems impacting civil liberties, and how to balance individual privacy with national security imperatives. As with past military technologies, pursuing cyber stability obligates wisdom alongside ingenuity.
Emerging Models for AI and Law Enforcement
Beyond commercial security, governments also grapple with the tradeoffs of integrating artificial intelligence into policing. In domains from surveillance to crime prediction, critics worry that unconstrained AI deployment threatens civil liberties without appropriate oversight.
For instance, some allege opaque government social media monitoring tools violate free speech when used for predictive policing or immigration vetting based on complex algorithmic inferences. Controversial facial recognition integration across law enforcement networks also allows suspect tracking at unprecedented scope and scale, alarming privacy advocates.
But equally concerning, researchers find machine learning crime prediction models often encode biased data reflecting distorted enforcement. This risks reinforcing discriminatory over-policing based on ethnicity, income, mental health history and other factors with little transparency or accountability. Establishing guardrails against such dangers remains deeply challenging.
In response, experts propose policies and technical standards preventing unethical uses of law enforcement AI. Suggestions include banning certain application domains, requiring warrants for algorithmic searches, building systems that enable due process, formalizing transparency requirements, mandating explainability of model behaviors affecting individual rights, and instituting oversight boards that audit for potential biases or overreach.
However, realization hinges on inclusive democratic processes achieving prudent balances between public safety needs and civil liberties protections. With extensive harms possible from unfettered imposition of black box systems, very careful governance alongside technology development remains imperative but complex for free societies. The risks of tomorrow obligate wisdom today.
Cyber Social Justice in the Age of Intelligent Algorithms
Beneath these security issues, the fusion of policing and AI reveals deeper social divides. As machine learning enters law enforcement, critics argue its deployment often disproportionately surveils and targets marginalized communities experiencing limited recourse.
For instance, low-income neighborhoods see disproportionately dense camera networks that feed the training of facial recognition systems, which still perform poorly on minority populations. And predictive policing systems claiming objectivity often encode racially skewed enforcement data or make spurious inferences correlating neighborhoods and race with unreported crime rates.
However, reform advocates counter that used equitably, AI could also improve accountability around profiling and abuse as seen with the rapid rise of police body cameras. The technology itself does not ordain outcomes. But design choices and governance profoundly influence impacts. Ethics obligations demand centering algorithmic justice to prevent exclusions or harms to vulnerable groups.
Experts propose reducing bias through techniques like excluding protected class data, testing models for disparate impacts, and insisting on diverse design teams. But holistic reform also requires embracing public participation and transparency guiding the technology's evolution. With wise policy and implementation, AI may yet fulfill its promise enhancing justice.
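One concrete disparate-impact test borrows the EEOC's "four-fifths rule": compare per-group rates of an outcome and flag ratios below 0.8. A minimal sketch with hypothetical counts (the rule comes from employment-selection guidance, so applying it to model flag rates is an illustrative adaptation):

```python
def disparate_impact_ratio(outcomes_by_group):
    """Compute per-group outcome rates and the ratio of the lowest
    rate to the highest. Under the 'four-fifths rule' heuristic, a
    ratio below 0.8 is a conventional red flag for adverse impact."""
    rates = {g: flagged / total
             for g, (flagged, total) in outcomes_by_group.items()}
    lo, hi = min(rates.values()), max(rates.values())
    return rates, (lo / hi if hi else 1.0)

# Hypothetical counts: (times a model flagged someone, group size).
groups = {"group_a": (30, 100), "group_b": (66, 100)}
rates, ratio = disparate_impact_ratio(groups)
print(rates, round(ratio, 2))  # ratio 0.45 -- well under the 0.8 line
```

A failing ratio does not by itself prove bias (base rates and confounders matter), but it is cheap to compute and gives audit boards a concrete number to demand explanations for.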
Protecting Innovation in an Age of Artificial Intelligence
As AI transforms cyber risk scenarios, it also disrupts calculations around online intellectual property protections and infringement. The generative power of algorithms challenges legal definitions of original creative works and ownership bounds in virtual and physical domains.
For instance, copyright law struggles with assessing robo-authored articles and artworks produced using language or image models like GPT-3 and DALL-E. Judges untrained in machine learning find it hard to distinguish transformative fair use from derivative products of automated systems. In the process, applying dated IP law to AI outputs grows increasingly complex.
However, the nature of creativity and innovation also shifts in the algorithmic era. Training datasets and neural network parameters constitute immense capital investments for companies. But overly strong protections also risk monopolization of knowledge itself. And AI infinitely multiplies generative abilities, albeit within parameters set by humans.
Reconciling protections incentivizing development while encouraging access and creativity constitutes an ongoing puzzle. Flexible fair use carve-outs may require expansion to foster innovation. Information wants to be free, but companies want it proprietary. Balance remains key, but difficult in rapidly evolving tech ecosystems.
The Role of Cyber Ethics in an Intelligent Future
Ultimately, artificial intelligence constitutes not a problem to solve, but a power to channel responsibly. As with any transformative technology diffusing rapidly, risks accompany benefits. However, outcomes depend profoundly on social will and wisdom guiding the tools we unleash, not technical ingenuity alone.
With AI, key ethical obligations around justice, transparency, accountability, non-discrimination, and human dignity become heightened given the technology's scale and complexity. But governance balancing rights and public safety also grows immensely more complicated. There are no easy answers in the intelligent algorithm era.
Yet challenges demand action, not retreat. With ethical application, AI could enhance protection of rights even amidst proliferating threats in digital realms. Technology is but a magnifying mirror for humanity's wisdom or folly. Conscience calls innovators, companies, governments and citizens alike to walk the high road together - embracing AI's gifts while restraining its hazards with vision and moral courage. The future remains ours to discover.
References
1. Balbix. "Using Artificial Intelligence in Cybersecurity."[1]
2. Computer Society. "The AI-Cybersecurity Nexus: The Good and the Evil."[2]
3. Terranova Security. "AI in Cyber Security: Pros and Cons."[3]
4. Booz Allen Hamilton. "The Role of Artificial Intelligence in Cybersecurity."[4]
5. CXOToday. "The Future of Cybersecurity: How is AI Revolutionizing the Battle Against Cyber Threats?"[5]
6. Council of Europe. "Respecting Human Rights and the Rule of Law When Using Automated Technology to Detect Online Child Sexual Exploitation and Abuse."[6]
7. Europol. "Internet Organised Crime Threat Assessment (IOCTA) 2020."
8. Royal United Services Institute (RUSI). "The UK's Response to Cyber Fraud: A Strategic Vision."
Citations:
[1] https://www.balbix.com/insights/artificial-intelligence-in-cybersecurity/
[2] https://www.computer.org/csdl/magazine/it/2022/05/09967400/1IIYBEMIaoE
[3] https://terranovasecurity.com/ai-in-cyber-security/
[6] https://rm.coe.int/respecting-human-rights-and-the-rule-of-law-when-using-automated-techn/1680a2f5ee
References provided by Perplexity.ai