MarketAlert โ€“ Real-Time Market & Crypto News, Analysis & AlertsMarketAlert โ€“ Real-Time Market & Crypto News, Analysis & Alerts
Font ResizerAa
  • Crypto News
    • Altcoins
    • Bitcoin
    • Blockchain
    • DeFi
    • Ethereum
    • NFTs
    • Press Releases
    • Latest News
  • Blockchain Technology
    • Blockchain Developments
    • Blockchain Security
    • Layer 2 Solutions
    • Smart Contracts
  • Interviews
    • Crypto Investor Interviews
    • Developer Interviews
    • Founder Interviews
    • Industry Leader Insights
  • Regulations & Policies
    • Country-Specific Regulations
    • Crypto Taxation
    • Global Regulations
    • Government Policies
  • Learn
    • Crypto for Beginners
    • DeFi Guides
    • NFT Guides
    • Staking Guides
    • Trading Strategies
  • Research & Analysis
    • Blockchain Research
    • Coin Research
    • DeFi Research
    • Market Analysis
    • Regulation Reports
Reading: Adversarial AI Digestโ€Š — โ€ŠJune, 2025
Share
Font ResizerAa
MarketAlert โ€“ Real-Time Market & Crypto News, Analysis & AlertsMarketAlert โ€“ Real-Time Market & Crypto News, Analysis & Alerts
Search
  • Crypto News
    • Altcoins
    • Bitcoin
    • Blockchain
    • DeFi
    • Ethereum
    • NFTs
    • Press Releases
    • Latest News
  • Blockchain Technology
    • Blockchain Developments
    • Blockchain Security
    • Layer 2 Solutions
    • Smart Contracts
  • Interviews
    • Crypto Investor Interviews
    • Developer Interviews
    • Founder Interviews
    • Industry Leader Insights
  • Regulations & Policies
    • Country-Specific Regulations
    • Crypto Taxation
    • Global Regulations
    • Government Policies
  • Learn
    • Crypto for Beginners
    • DeFi Guides
    • NFT Guides
    • Staking Guides
    • Trading Strategies
  • Research & Analysis
    • Blockchain Research
    • Coin Research
    • DeFi Research
    • Market Analysis
    • Regulation Reports
Have an existing account? Sign In
Follow US
ยฉ Market Alert News. All Rights Reserved.
  • bitcoinBitcoin(BTC)$66,034.00-1.70%
  • ethereumEthereum(ETH)$1,926.95-2.39%
  • tetherTether(USDT)$1.000.00%
  • rippleXRP(XRP)$1.35-2.36%
  • binancecoinBNB(BNB)$595.00-3.44%
  • usd-coinUSDC(USDC)$1.000.00%
  • solanaSolana(SOL)$77.95-3.40%
  • tronTRON(TRX)$0.277913-0.32%
  • Figure HelocFigure Heloc(FIGR_HELOC)$1.04-0.38%
  • dogecoinDogecoin(DOGE)$0.091890-0.75%
Blockchain

Adversarial AI Digestโ€Š — โ€ŠJune, 2025

Last updated: June 25, 2025 2:05 pm
Published: 8 months ago
Share

๐Ÿ“Œ Mapping: MAESTRO Threats — MITRE D3FEND Techniques — This website presents an interactive exploration of the intersection between two pivotal cybersecurity frameworks: MAESTRO and MITRE D3FEND. It aims to provide cybersecurity professionals with actionable insights into securing Agentic AI systems by mapping identified threats to corresponding defensive techniques, by Edward Lee. https://edward-playground.github.io/maestro-d3fend-mapping/

๐Ÿ“Œ 10 Key Risks of Shadow AI — A practical breakdown of Shadow AI: how unmanaged AI use — including tools, models, and features — creates hidden risks across security, compliance, data, and governance. https://www.linkedin.com/pulse/10-key-risks-shadow-ai-tal-eliyahu-9aopc/

๐Ÿ“Œ How an AI Agent Vulnerability in LangSmith Could Lead to Stolen API Keys and Hijacked LLM Responses — by Sasi Levi and Gal Moyal, Noma Security — https://noma.security/blog/how-an-ai-agent-vulnerability-in-langsmith-could-lead-to-stolen-api-keys-and-hijacked-llm-responses/

๐Ÿ“Œ Explore the latest threats to Model Context Protocol (MCP) — covering issues from prompt injection to agent hijacking — in this digest collected by Adversa AI. https://adversa.ai/blog/mcp-security-digest-june-2025/

๐Ÿ“Œ GenAI Guardrails: Implementation & Best Practices — Lasso outlines how organizations are designing and deploying guardrails for generative AI — including challenges, frameworks, and real-world examples. https://www.lasso.security/blog/genai-guardrails

๐Ÿ“Œ Trend Micro’s “Unveiling AI Agent Vulnerabilities” 4-part series explores key security threats in agentic AI systems — including Part I: Introduction, Part II: Code Execution, Part III: Data Exfiltration, and Part IV: Database Access.

๐Ÿ“Œ Malicious AI Models Undermine Software Supply-Chain Security — https://cacm.acm.org/research/malicious-ai-models-undermine-software-supply-chain-security/

๐Ÿ“Œ Leaking Secrets in the Age of AI — Shay Berkovich and Rami McCarthy scanned public repos and found widespread AI-related secret leaks — driven by notebooks, hardcoded configs, and gaps in today’s secret scanning tools. https://www.wiz.io/blog/leaking-ai-secrets-in-public-code

๐Ÿ“Œ What is AI Assets Sprawl? Causes, Risks, and Control Strategies — Dor Sarig from Pillar Security explores how unmanaged AI models, prompts, and tools accumulate across enterprises — creating security, compliance, and visibility challenges without proper controls. https://www.pillar.security/blog/what-is-ai-assets-sprawl-causes-risks-and-control-strategies

๐Ÿ“Œ Is your AI safe? Threat analysis of MCP — Nil Ashkenazi outlines how Model Context Protocol (MCP) introduces risks like tool misuse, prompt-based exfiltration, and unsafe server chaining. Focus is on real-world attack paths and how insecure integrations can be exploited. https://www.cyberark.com/resources/threat-research-blog/is-your-ai-safe-threat-analysis-of-mcp-model-context-protocol

๐Ÿ“Œ A New Identity Framework for AI Agents by Omar Santos — We are all experiencing the rapid proliferation of autonomous AI agents and Multi-Agent Systems (MAS). These are no longer AI chatbots and assistants; they are increasingly self-directed entities capable of making decisions, performing actions, and interacting with critical systems at unprecedented scales. We need to perform fundamental re-evaluations of how identities are managed and access is controlled for these AI agents. https://community.cisco.com/t5/security-blogs/a-new-identity-framework-for-ai-agents/ba-p/5294337

๐Ÿ“Œ Hunting Deserialization Vulnerabilities With Claude — TrustedSec exploring how to find zero-days in .NET assemblies using Model Context Protocol (MCP). https://trustedsec.com/blog/hunting-deserialization-vulnerabilities-with-claude

๐Ÿ“Œ Uncovering Nytheon AI — Vitaly Simonovich ๐Ÿ‡ฎ๐Ÿ‡ฑ from Cato Networks, analyzes Nytheon AI, a Tor-based GenAI platform built from jailbroken open-source models (Llama 3.2, Gemma, Qwen2), offering code generation, multilingual chat, image parsing, and API access — wrapped in a modern SaaS-style interface. https://www.catonetworks.com/blog/cato-ctrl-nytheon-ai-a-new-platform-of-uncensored-llms/

๐Ÿ“Œ Touchpoints Between AI and Non-Human Identities — Tal Skverer from Astrix Security and Ophir Oren ๐Ÿ‡ฎ๐Ÿ‡ฑ from Bayer examine how AI agents rely on non-human identities (NHIs) — API keys, service accounts, OAuth apps — to operate across platforms. Unlike traditional automation, these agents request dynamic access, mimic users, and often require multiple NHIs per task, creating complex, opaque identity chains. https://astrix.security/learn/blog/astrix-research-presents-touchpoints-between-ai-and-non-human-identities/

๐Ÿ“Œ Breaking down ‘EchoLeak’, Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot — Itay Ravia and other members of Aim Security identified a vulnerability in Microsoft 365 Copilot where specially crafted emails can trigger data leakage through prompt injection and markdown/CSP bypasses. The issue stems from how Copilot processes untrusted input, potentially exposing internal content. https://www.aim.security/lp/aim-labs-echoleak-blogpost

๐Ÿ“Œ Remote Prompt Injection in GitLab Duo Leads to Source Code Theft — Legit Security’s Omer Mayraz demonstrates how a single hidden comment could trigger GitLab Duo (Claude-powered) to leak private source code, suggest malicious packages, and exfiltrate zero-days. The exploit chain combines prompt injection, invisible text, markdown-to-HTML rendering abuse, and access to sensitive content — showcasing the real-world risks of deeply integrated AI agents in developer workflows. https://www.legitsecurity.com/blog/remote-prompt-injection-in-gitlab-duo

๐Ÿ“Œ ISO/IEC 42005:2025 has been formally published. ISO 42005 provides guidance for organizations conducting AI system impact assessments. Establishing a process and performing an AI system impact assessment is integral for organizations looking to pursue ISO 42001 certification. More importantly, the AI impact assessment allows organizations to identify high risk AI systems and determine any potential impact to individuals, groups, or societies as it relates to fairness, safety, and transparency. https://www.ethos-ai.org/p/ai-impact-checklist or https://www.linkedin.com/posts/noureddine-kanzari-a852a6181_iso-42005-the-standard-of-the-future-activity-7334498579710943233-ts6S

๐Ÿ“Œ Checklist for LLM Compliance in Government — Deploying AI in government? Compliance isn’t optional. Missteps can lead to fines reaching $38.5M under global regulations like the EU AI Act — or worse, erode public trust. This checklist ensures your government agency avoids pitfalls and meets ethical standards while deploying large language models (LLMs). https://www.newline.co/@zaoyang/checklist-for-llm-compliance-in-government–1bf1bfd0

๐Ÿ“Œ How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation by Sean Heelan. https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/

๐Ÿ“˜ Confidential AI Inference Systems — Anthropic and Pattern Labs are exploring confidential inference — an approach for running AI models on sensitive data without exposing it to infrastructure operators or cloud providers. In a typical AI deployment, three parties are involved: the model owner, the user providing the data, and the cloud provider hosting the service. Without safeguards, each must trust the others with sensitive assets. Confidential inference eliminates this need by enforcing cryptographic boundaries — ensuring that neither the data nor the model is accessible outside the secure enclave, not even to the infrastructure host. https://www.linkedin.com/feed/update/urn:li:activity:7341501922383695872

๐Ÿ“˜ AI Red-Team Playbook for Security Leaders — Hacken, Blockchain Security Auditor’s AI Red-Team Playbook for Security Leaders offers a strategic framework for safeguarding large language model (LLM) systems through lifecycle-based adversarial testing. It identifies emerging risks — prompt injections, jailbreaks, retrieval-augmented generation (RAG) exploits, and data poisoning — while emphasizing real-time mitigation and multidisciplinary collaboration. The playbook integrates methodologies like PASTA (Process for Attack Simulation and Threat Analysis) and STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), reinforcing the importance of aligning AI security with enterprise risk governance. https://www.linkedin.com/feed/update/urn:li:activity:7339375352231710721

๐Ÿ“˜ AI Security Market Report by Latio — Security practitioners have been searching for a resource that clearly describes both what AI security challenges exist, and what solutions the market has provided. As part of this report, Latio surveyed numerous security leaders and found a consistent response: interest in AI Security is high, but it’s still unclear what the actual problems are. This report brings Latio’s characteristic practitioner focused mindset to the problem, highlighting what challenges are out there, and clearly stating the maturity of various vendor offerings to the challenges. https://www.linkedin.com/feed/update/urn:li:activity:7325564352156119040

๐Ÿ“˜ Fundamentals of Secure AI Systems with Personal Data by Enrico Glerean — is a training for cybersecurity professionals, developers and deployers of AI systems on AI security & Personal Data Protection addressing the current AI needs and skill gaps. https://www.linkedin.com/feed/update/urn:li:activity:7337637796007817216

๐Ÿ“˜ Security Risks in Artificial Intelligence for Finance — Set of best practices intended for the Board and C-Level by EFR — European Financial Services Round Table https://www.linkedin.com/feed/update/urn:li:activity:7341851628955758592

๐Ÿ“˜ Disrupting malicious uses of AI: June 2025 — OpenAI continues its work to detect and prevent the misuse of AI, including threats like social engineering, cyber espionage, scams, and covert influence operations. In the last three months, AI tools have helped OpenAI’s teams uncover and disrupt malicious campaigns. Their efforts align with a broader mission to ensure AI is used safely and democratically — protecting people from real harms, not enabling authoritarian abuse. https://www.linkedin.com/feed/update/urn:li:activity:7336790322426912768

๐Ÿ“˜ Agentic AI Red Teaming Guide by Cloud Security Alliance — Agentic systems introduce new risks — autonomous reasoning, tool use, and multi-agent complexity — that traditional red teaming can’t fully address. This guide aims to fill that gap with practical, actionable steps. https://www.linkedin.com/feed/update/urn:li:activity:7333874110684348417

๐Ÿ“˜ AI Data Security — Best Practices for Securing Data Used to Train & Operate AI Systems by Cybersecurity and Infrastructure Security Agency — This guidance highlights the critical role of data security in ensuring the accuracy, integrity, and trustworthiness of AI outcomes. It outlines key risks that may arise from data security and integrity issues across all phases of the AI lifecycle, from development and testing to deployment and operation. https://www.linkedin.com/feed/update/urn:li:activity:7331359099193872387

๐Ÿ“… Artificial Intelligence Risk Summit — August 19-20, 2025 | https://www.airisksummit.com/

๐Ÿ“… The AI Summit at Security Education Conference Toronto (SecTor) 2025 — September 30, 2025 | MTCC, Toronto, Ontario, Canada | https://www.blackhat.com/sector/2025/ai-summit.html

๐Ÿ“… The International Conference on Cybersecurity and AI-Based Systems — 1-4 September, 2025 | Bulgaria | https://www.cyber-ai.org/

๐Ÿ“– AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models — https://arxiv.org/abs/2506.14682

๐Ÿ“– VulBinLLM: LLM-powered Vulnerability Detection for Stripped Binaries — https://arxiv.org/abs/2505.22010

๐Ÿ“– CAI: An Open, Bug Bounty-Ready Cybersecurity AI — https://arxiv.org/abs/2504.06017

๐Ÿ“– Dynamic Risk Assessments for Offensive Cybersecurity Agents — https://arxiv.org/abs/2505.18384

๐Ÿ“– Design Patterns for Securing LLM Agents against Prompt Injections — https://arxiv.org/abs/2506.08837

๐Ÿ“– PANDAGUARD: Systematic Evaluation of LLM Safety against Jailbreaking Attacks https://arxiv.org/abs/2505.13862

๐Ÿ“– Lessons from Defending Gemini Against Indirect Prompt Injections — https://arxiv.org/abs/2505.14534

๐Ÿ“– Enterprise-Grade Security for the Model Context Protocol (MCP): Frameworks and Mitigation Strategies https://arxiv.org/abs/2504.08623

๐Ÿ“– Securing AI Agents with Information-Flow Control — As AI agents become increasingly autonomous and capable, ensuring their security against vulnerabilities such as prompt injection becomes critical. This paper explores the use of information-flow control (IFC) to provide security guarantees for AI agents.https://arxiv.org/abs/2505.23643

๐Ÿ“– Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training — Large Language Models (LLMs) are pre-trained on large amounts of data from different sources and domains. These data most often contain trillions of tokens with large portions of copyrighted or proprietary content, which hinders the usage of such models under AI legislation. This raises the need for truly open pre-training data that is compliant with the data security regulations. In this paper, we introduce Common Corpus, the largest open dataset for language model pre-training. https://arxiv.org/abs/2506.01732

๐Ÿ“– A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control — https://arxiv.org/abs/2505.19301

๐Ÿงฐ GitHub Secure Code Game — A GitHub Security Lab initiative, providing an in-repo learning experience, where learners secure intentionally vulnerable code. https://github.com/PromptLabs/Prompt-Hacking-Resources

๐Ÿงฐ Cybersecurity AI (CAI), an open Bug Bounty-ready Artificial Intelligence by Alias Robotics — https://github.com/aliasrobotics/cai

๐Ÿงฐ Awesome LLMSecOps — LLM | Security | Operations in one github repo with good links and pictures. https://github.com/wearetyomsmnv/Awesome-LLMSecOps

๐Ÿงฐ tracecat by Tracecat is a modern, open source automation platform built for security and IT engineers. Simple YAML-based templates for integrations with a no-code UI for workflows. Built-in lookup tables and case management. Orchestrated using Temporal for scale and reliability. https://github.com/TracecatHQ/tracecat

๐Ÿงฐ MCP-Defender — Desktop app that automatically scans and blocks malicious MCP traffic in AI apps like Cursor, Claude, VS Code and Windsurf. https://github.com/MCP-Defender/MCP-Defender

๐Ÿงฐ deepteam by Confident AI (YC W25) — The LLM Red Teaming Framework. https://github.com/confident-ai/deepteam

๐Ÿงฐ Awesome Cybersecurity Agentic AI — https://github.com/raphabot/awesome-cybersecurity-agentic-ai

โ–ถ๏ธ Why MCP Agents Are the Next Cyber Battleground

โ–ถ๏ธ Hacking Windsurf: I Asked the AI for System Access — It Said Yes

โ–ถ๏ธ Hey Gemini, improve the APT Hunt run book — AI Runbooks for Google SecOps: Security Operations with Model Context Protocol — This project provides specialized AI agents that collaborate to handle security operations tasks including incident response, threat hunting, detection engineering, and security operations center (SOC) activities by Daniel Dye

โ–ถ๏ธ Living off Microsoft Copilot This talk is a comprehensive analysis of Microsoft copilot taken to red-team-level practicality. We will show how Copilot plugins can be used to install a backdoor into other user’s copilot interactions, allowing for data theft as a starter and AI-based social engineering as the main course. We’ll show how hackers can circumvent built-in security controls which focus on files and data by using AI against them.

โ–ถ๏ธ This is how you make a $100 billion AI worm

1๏ธโƒฃ Kali GPT — Generates payloads, guides the use of Metasploit/Hydra, and explains techniques step-by-step — https://chatgpt.com/g/g-uRhIB5ire-kali-gpt

2๏ธโƒฃ White Rabbit Neo — Automates exploits and offensive scripts. Pure Red Team thinking — https://www.whiterabbitneo.com/

3๏ธโƒฃ Pentest GPT — Scans, exploits, reports. Follows OWASP workflows, automates findings — https://pentestgpt.ai/

4๏ธโƒฃ Bug Hunter GPT — Finds and exploits XSS, SQLi, CSRF. Generates PoCs step-by-step — https://chatgpt.com/g/g-y2KnRe0w4-bug-hunter-gpt

5๏ธโƒฃ X HackTricks GPT — Trained with hacktricks-xyz. Offensive and defensive techniques in context — https://chatgpt.com/g/g-aaNx59p4q-hacktricksgpt

6๏ธโƒฃ OSINT GPT — Finds leaks, analyzes social networks, dorks, domains, and more — https://chatgpt.com/g/g-ysjJG1VjM-osint-gpt

7๏ธโƒฃ SOC GPT — Automates analysis of SIEM alerts, ticket generation, and responses — https://chatgpt.com/g/g-tZAEuGaru-soc

8๏ธโƒฃ BlueTeam GPT — Designed for defenders: anomaly detection, hardening, MITRE ATT&CK — https://chatgpt.com/g/g-GP9M4UScu-blue-team-guide

9๏ธโƒฃ Threat Intel GPT — Summarizes threat reports, analyzes IOCs and TTPs in seconds — https://chatgpt.com/g/g-Vy4rIqiCF-threat-intel-bot

๐Ÿ”Ÿ YARA GPT — Writes and explains YARA rules for advanced detection — https://chatgpt.com/g/g-caq5P2JnM

If you’re a founder building something new or an investor evaluating early-stage opportunities — let’s connect.

๐Ÿ’ฌ Read something interesting? Share your thoughts in the comments.

Read more on Medium

This news is powered by Medium Medium

Share this:

  • Share on X (Opens in new window) X
  • Share on Facebook (Opens in new window) Facebook

Like this:

Like Loading...

Related

Pakistan to train 1 million non-IT professionals in AI under nationwide productivity drive – Profit by Pakistan Today
Aria airdrop launches $APL token backed by $10.95M in music rights
IMXI Stock Surges 54% As Western Union Buys Intermex in $500 Million Deal – International Money (NASDAQ:IMXI), Western Union (NYSE:WU)
IRAEmpire LLC: Best 401k Gold Investments: How to Invest 401k in Gold
Best Upcoming Crypto for 2026: 9 Picks Where Apeing Delivers 10,000x ROI

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Email Copy Link Print
Previous Article San Antonio Spurs Partners With Ledger for 2025 NBA Jersey Branding
Next Article XRP Ledger 2.5.0 Update Brings Major Enhancements to Escrow, DEX, and AMM Security – TokenPost
ยฉ Market Alert News. All Rights Reserved.
Welcome Back!

Sign in to your account

Username or Email Address
Password

Prove your humanity


Lost your password?

%d