- The Threat of Adversarial AI - wiz.io
<a href="https://news.google.com/rss/articles/CBMiekFVX3lxTE8wSzRsRk80RUtXOVRfMFZTN2RjYkZ2amFicDM3V25VVEJyTTd5RDU2TXB4TXFzYUh3cWxsdDhTbEVGRkg2ZFI1cmhCVHBqalIzX2tFaWdpSDBSRmxUeXYzRmMzOUx6OVFGVWwydTZ5ZHdia1Y0ZFJqYjFB?oc=5" target="_blank">The Threat of Adversarial AI</a> <font color="#6f6f6f">wiz.io</font>
- Project Glasswing and AI Driven Cybersecurity Shift - usthadian.com
<a href="https://news.google.com/rss/articles/CBMihAFBVV95cUxNYjdSbHVpSjFiNjUwVWQ1c1g0T2dtNHZrMlFqVXhlUmdSMThEakRFNU1kMERLcl9pQjJoS1RGcnUzcUpzWFVyR1VxWHduWHpFUnpaMW5HWllCZEtoWk14WWVTbUJBRVk5YUJCVGVkejlRR0lreHRMdHE2d25QTTBHeGt5aEE?oc=5" target="_blank">Project Glasswing and AI Driven Cybersecurity Shift</a> <font color="#6f6f6f">usthadian.com</font>
- Adversarial Examples in Computer Vision Guide - Blockchain Council
<a href="https://news.google.com/rss/articles/CBMijwFBVV95cUxOX2xzNlNvVS1ZbUxScWl2NGNuZ3hQcEZ6TF84cnlTemU1VTZDNkRMQjdFNkRrejU3WXoyNXJxRWtmS3h6T2hLY3BkLWlFcWRiSmMxWUFORjB2aUUyZ2wyZGJfRy1QbnpsRDlQMzQ3STdKVE5hTmhjRG15YS01UXMxazRiNHFCTGZmdzNMMEdBVQ?oc=5" target="_blank">Adversarial Examples in Computer Vision Guide</a> <font color="#6f6f6f">Blockchain Council</font>
- Securing Robot Vision: Leveraging Adversarial Detection for Data Poisoning Defense - Newswise
<a href="https://news.google.com/rss/articles/CBMitgFBVV95cUxQV05ZZVdDblNmQWp0TGZQT3FyQ3ZZTjFDMjJVeE54Rk56T01pNkJDWjU0cVBzLVdJX0JLTHVHYkJhQWRncG1zM1M3amxYMUpNeklNazRzN0VQZktDVy1uQ1NzNjJuLTFKbVJjNWZQZjdDLUdtWU1lSWR5RVJWMEFfNTQ2QVZEQVRHRllxTldGSFUtdkEyVE9zbWZzeWJISGxCUG9vNEJYQUx5QTRBbUlEQklhWDkyd9IBtgFBVV95cUxQV05ZZVdDblNmQWp0TGZQT3FyQ3ZZTjFDMjJVeE54Rk56T01pNkJDWjU0cVBzLVdJX0JLTHVHYkJhQWRncG1zM1M3amxYMUpNeklNazRzN0VQZktDVy1uQ1NzNjJuLTFKbVJjNWZQZjdDLUdtWU1lSWR5RVJWMEFfNTQ2QVZEQVRHRllxTldGSFUtdkEyVE9zbWZzeWJISGxCUG9vNEJYQUx5QTRBbUlEQklhWDkydw?oc=5" target="_blank">Securing Robot Vision: Leveraging Adversarial Detection for Data Poisoning Defense | Newswise</a> <font color="#6f6f6f">Newswise</font>
- Attentional semantic attack for enhancing adversarial samples transferability - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE5sdnZwcnR4SmNqRGFJRTlQUDFlNkVuT3daa29UYUhmTlNTNjVoWS1BbUZmSmRwOG85bGRMOHh1SnRVWDJ5NUNSUE5oZWxaNzJUMUFvZ2wzSDVNWjJDSlVr?oc=5" target="_blank">Attentional semantic attack for enhancing adversarial samples transferability</a> <font color="#6f6f6f">Nature</font>
- NDSS 2025 – Revisiting Physical-World Adversarial Attack On Traffic Sign Recognition - Security Boulevard
<a href="https://news.google.com/rss/articles/CBMivgFBVV95cUxOQmxZMGhNa04wRUxUdk9QTWNvcW1jMTE1b3lCanNUdk5MR1NEUUI3UXFRZU42RlpCLXdQWS0wbmpLWUlVaWQtQzBLaHU2SEFKaWZsY3IxeGZQd0dhcUtiUGtJRmZuclc3aXFyVExYVW1ZSWFWQnF3VU9ydzBhSmtTNlFEU1oxSjE0blZaS2xfS0YtTW1ZX1R5Y1dhSXRscWw1alJ0ZmZjYU0tWkZiSnpxeWhWZUJIQWt1RlZabi13?oc=5" target="_blank">NDSS 2025 – Revisiting Physical-World Adversarial Attack On Traffic Sign Recognition</a> <font color="#6f6f6f">Security Boulevard</font>
- Self-purification: Enhancing adversarial defense by leveraging local relative robustness - ScienceDirect.com
<a href="https://news.google.com/rss/articles/CBMie0FVX3lxTFBGczkxdjZYdGhrSW1iWFd6RklJWVV2dkZ2T3BaUTB2N2NNdDlHV0JHU3FNUHVmZkUtR3M5Q1Z3emNXeF9XbFZwOEJ4c3BWV09kdVFHRldMWXMyWXE5bFhVemlEUEJ1dlZKN0Uzdi1ZMTBlcWN1T3BnTmVVdw?oc=5" target="_blank">Self-purification: Enhancing adversarial defense by leveraging local relative robustness</a> <font color="#6f6f6f">ScienceDirect.com</font>
- Adversarial Attacks: Anthropic Says Chinese Labs Distilled Claude - AI CERTs
<a href="https://news.google.com/rss/articles/CBMimAFBVV95cUxQRXBXczZpVFY5aFlYRllUendrUkJFRGF4aFA0Tm5sY0ViMmRWbERsT2otYWpjOXRjSUZNS21Ld1RtaXFXZDZwMXhscmoxM2JIMjJ0NkNOSDljUG5rSnJiUi1wRnItWE5OYjh6YUpOc0xJSTczWTNROGZSZndva3FYZ29aRjFzVGxZUHJmNldYS1ozclp5azlKTw?oc=5" target="_blank">Adversarial Attacks: Anthropic Says Chinese Labs Distilled Claude</a> <font color="#6f6f6f">AI CERTs</font>
- Enhancing adversarial resilience in semantic caching for secure retrieval augmented generation systems - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFA4TWYxZkRtaF8wbzRrZnFrVGZ1X3d2d2RqSE9MazdHbldQSmdGaG5Kb2dUWHRSek51bTROY2NKcHB3YU9rb25VSlFZakQ1WTNxZlAyZENiZUFoM3FTblhN?oc=5" target="_blank">Enhancing adversarial resilience in semantic caching for secure retrieval augmented generation systems</a> <font color="#6f6f6f">Nature</font>
- NIST Finalizes Cyber Attack Guidance for Adversarial Machine Learning - Hunton Andrews Kurth LLP
<a href="https://news.google.com/rss/articles/CBMixgFBVV95cUxQNkNFcjBiS2lIMHJYcVR0NDZaR1BvVDdmQ1NFQVMydmNCd09QVVNBQ09ZN3oteTRHS3B3Mm9yamxFOG1NTlpoeVBWSFkta3NxRTFrTG5PNV9DSF81ZG9Uc3hyb3Q5NVJDa2JuRU4xSjJsdDcxQjJfMlJqd0dxN0toZ0lUMlhmR3p0eU9sODhPTWJVZWs4VjVqTG9sVi1mTWxlNTBXX3VxczdzWHNJMHgxWF9QUjhCelVnYldpdG16U2d0M19Ecnc?oc=5" target="_blank">NIST Finalizes Cyber Attack Guidance for Adversarial Machine Learning</a> <font color="#6f6f6f">Hunton Andrews Kurth LLP</font>
- AI Security Guide 2026 - Blockchain Council
<a href="https://news.google.com/rss/articles/CBMia0FVX3lxTE0tTkNhckRZQWRZZDNKQ2QtZk9qTHROZUo0LThZRGNhczFzRmttaDBZOVBGTllHOUE0ZXNHd25iYWswMmN5YW54cE9lUnZGZkQxMHo1WlcwdG5FRmZiTHlaNl9mU1NoUlVScFI0?oc=5" target="_blank">AI Security Guide 2026</a> <font color="#6f6f6f">Blockchain Council</font>
- Protecting Data in the Age of Cyber Warfare - CIRSD
<a href="https://news.google.com/rss/articles/CBMie0FVX3lxTE00N09lQzhheFpIUjZTel84NFpxazlCbGVpc094eDdTaHFEV2p5VkdqRjllSG5GTjNDcVNqcmR5U1FBVjdZdHphTjZLa3RBaUdNSTB5QnFUQVRSTTdpT0xmbGFIYnpHcjFQOUQ1aVQxOXZweWRIWHZXdEM2SQ?oc=5" target="_blank">Protecting Data in the Age of Cyber Warfare</a> <font color="#6f6f6f">CIRSD</font>
- Query-efficient decision-based adversarial attack with low query budget - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1vek9RT044QkVKejZJSjBvZkJkSGFKMlhVTW82V05FQjFHTENsZGkxQkRlV2FFWnVNd1EwSkY5VHdBSXB6V2xrM1VkV1kzTFNQVEc0OGw2dXg0OU11ci1R?oc=5" target="_blank">Query-efficient decision-based adversarial attack with low query budget</a> <font color="#6f6f6f">Nature</font>
- Blockchain-enabled identity management for IoT: a multi-layered defense against adversarial AI - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBtQ1R3ZXJKMDBldFF6SDZ3T2J3b2gzNG1TOHRycnhWRm91dFJfeVNmOEFPM0czVzNTdWRzUnhhWHpKazRZZDJFLUxieUd2N2l1a21JRkJoQkhkSVY3QUtv?oc=5" target="_blank">Blockchain-enabled identity management for IoT: a multi-layered defense against adversarial AI</a> <font color="#6f6f6f">Nature</font>
- Adversarial robust EEG-based brain–computer interfaces using a hierarchical convolutional neural network - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE56UTZtY1Q2anZQb0k1R3BKZUNTUzU0aE96RUF1NEh2RkpzUzl5YW1uZlBRWVQ3VFBZbEtDYTl0eU8xRmswdDh5WXhxSDYtTEhVM2V0cG51Z3JRckE1U0h3?oc=5" target="_blank">Adversarial robust EEG-based brain–computer interfaces using a hierarchical convolutional neural network</a> <font color="#6f6f6f">Nature</font>
- Dqas Achieves Robust Quantum Computer Vision Against - Quantum Zeitgeist
<a href="https://news.google.com/rss/articles/CBMif0FVX3lxTE1HZ1BHX1lYa3VQZG5oUjRZNEZUVjZUelZGeVdYQVhhbGZtVVFEdF95U21Eal9pZnkwck9yU0dUd1lCd29FVDIycmh3dXZGV2JnSUNVcGFkNmQyZEhVaDlsTm1vV0F2TGJ3TVdvazFrb205TDQtazlQNkNmbVVEd1U?oc=5" target="_blank">Dqas Achieves Robust Quantum Computer Vision Against</a> <font color="#6f6f6f">Quantum Zeitgeist</font>
- Updating Classifier Evasion for Vision Language Models | NVIDIA Technical Blog - NVIDIA Developer
<a href="https://news.google.com/rss/articles/CBMikwFBVV95cUxQUkNZRS1MYi12YkNIbUxhSVpIcGxMa1JteElxbkIzb1hhSng4Q0xJLVcxckNkcUdaVV9zWFp3Ym9EM1VJdVJucVlFM2psUDNyZnNmX2NoX0dFVWh1TUk0Y3RwQ013YTVFWjRTZ2JJekNtRHhXeExxbDhPcVV4OE84Vmx1d3VoVW0ydVViS0ZfTC0xLTQ?oc=5" target="_blank">Updating Classifier Evasion for Vision Language Models | NVIDIA Technical Blog</a> <font color="#6f6f6f">NVIDIA Developer</font>
- Evaluating gait system vulnerabilities through PPO and GAN-generated adversarial attacks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE0xWDhVbEZDendqMzNad1lFQW92Rnk4ZmFLNTMzdW9pQ2dWLXJ6N2MxeE1uWGNMRFh2STZSSl9iQUtZMTI4bFl6ZHdaN3dWdm5mNFV4ai1kYkRqbWZ3dEVJ?oc=5" target="_blank">Evaluating gait system vulnerabilities through PPO and GAN-generated adversarial attacks</a> <font color="#6f6f6f">Nature</font>
- Stress-testing AI vision systems: Rethinking how adversarial images are generated - Tech Xplore
<a href="https://news.google.com/rss/articles/CBMihwFBVV95cUxOeEFHbHNrNTJ5SEJLcFJDMlBaanE3UHdWY1BicjVQX05qQk1uem1BeUN1akdQRXNDejhPMC1jSU1jWkk5Y0llU0JsTExrcFBLS1lfemdlVERmRzBPODBiTkE2Y1NKOC13TmZVLTJzT0lnNE1FUlpFYmRrX1FpcjJGU3ZVaFIwR1E?oc=5" target="_blank">Stress-testing AI vision systems: Rethinking how adversarial images are generated</a> <font color="#6f6f6f">Tech Xplore</font>
- Dialectal substitution as an adversarial approach for evaluating Arabic NLP robustness - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBqYVRlc2YtbzFNQU9RY1NxOGgxUWxpVVZZUjRuUlIwMnkxX0htVnNsSk9kZlZ2cHVXZXJlN0E5UlZSZ3QwUnRXSnVsS0haQTJRMFlxVllFeWE1Yk1UUWRB?oc=5" target="_blank">Dialectal substitution as an adversarial approach for evaluating Arabic NLP robustness</a> <font color="#6f6f6f">Nature</font>
- Adversarial robustness guarantees for quantum classifiers - npj Quantum Information - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1TWFoxTlJ6VHVwQy11SVZLR3ZUVTJDQkV0TTkzNjhfOC1GeUZockZUYmlxdTRWNnpEYWlYQmh4cWZ2OEhWZmZ4VFdnbXozWGo4UU84UTZJOHVMc3NaVElJ?oc=5" target="_blank">Adversarial robustness guarantees for quantum classifiers - npj Quantum Information</a> <font color="#6f6f6f">Nature</font>
- Optimized CatBoost machine learning (OCML) for DDoS detection in cloud virtual machines with time-series and adversarial robustness - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBCUTdfQkctSjhJaVlLOU5lYkNnd3dLM3VhRExZOExsZWo1X1F4WXdiRE1yOVM4RHlrSUZEZjcxMUNIdl9SeWxmSVJWdHhfXzY3aHhMdmo4X1dEbU1Mbk84?oc=5" target="_blank">Optimized CatBoost machine learning (OCML) for DDoS detection in cloud virtual machines with time-series and adversarial robustness</a> <font color="#6f6f6f">Nature</font>
- Nexus scissor: enhance open-access language model safety by connection pruning - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBxUldOY1lqLXFaaHZBa1RsU29MRUtqU2N1cG9SU25SYWNRQXQ1MDdpY3JsZnZibXFuRzl3TGlWYWFmREFmWVg4TDA3aGoyYVNQWWZXVnFoNk9zRHp2T1Br?oc=5" target="_blank">Nexus scissor: enhance open-access language model safety by connection pruning</a> <font color="#6f6f6f">Nature</font>
- 2026: NATO funds UOW research to protect drones against cyber attacks - University of Wollongong – UOW
<a href="https://news.google.com/rss/articles/CBMiowFBVV95cUxOMGFjb0hKa0trTjJjNXJWeGh6ZUdvTEUxMXdEalN4c3FqM3J2X3Y5Q0hDT2E1NUliaUYyQ0tmQWxpVWZSakhoTmFOM2s0c2NaMjlqYWhIOTVyQjlGM1RWYUpBa3pfREdaUkFDTXJneUFUV09FbndPWDJFRjNpb2FRS2tjSjF1N3FlU0RDUjMwajMxRWtDay1PWFlPdVVYbE0yV1dZ?oc=5" target="_blank">2026: NATO funds UOW research to protect drones against cyber attacks - University of Wollongong</a> <font color="#6f6f6f">University of Wollongong – UOW</font>
- Continuously hardening ChatGPT Atlas against prompt injection attacks - OpenAI
<a href="https://news.google.com/rss/articles/CBMidEFVX3lxTE9BeDVtWUpQWmljd2lscmZRTkFtUzl2Vi14SWxBWDFrZTI3UGdsQTlkYXZRMVdYQUN6dG9ROGZtSDhrdmJ5cEdzZzZjYVJqNm85NDV5bEdRSGtrbWNCejZBYm1abURVNHl2eWZKdmJXZ0FJTDlM?oc=5" target="_blank">Continuously hardening ChatGPT Atlas against prompt injection attacks</a> <font color="#6f6f6f">OpenAI</font>
- Adversarial AI preparedness in defense and national security - Guidehouse
<a href="https://news.google.com/rss/articles/CBMijgFBVV95cUxQcFVYNDBOMVZIRUV4U0xoVWNuQ28zOXcyVG8xLWNpaGNRLW5kUlI2TWFOZGZ3NnhXekxTcWNmdjRROW8zcGMtMkFVV2JiNjdFOVFrQVpnYi1XcDMwLWNlT1I0V1RkWjhZZG52VGQ4Ul92YWF1aDVwV1lKUUdQSWdFNVZ0ZF9QRUdmeHVBMndR?oc=5" target="_blank">Adversarial AI preparedness in defense and national security</a> <font color="#6f6f6f">Guidehouse</font>
- Hybrid GNN–LSTM defense with differential privacy and secure multi-party computation for edge-optimized neuromorphic autonomous systems - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE8tRnFMc3VRZVc4ZEJwQVQzOWN3NmxsZlR5ci1zQ3JUYnNhMlg0RnNoM2tYVk9qaVBFYWNiWGd3WVlscm1MdlJabGJoMXk0VjJvX2RWNEJiQTFPNjNsdl9V?oc=5" target="_blank">Hybrid GNN–LSTM defense with differential privacy and secure multi-party computation for edge-optimized neuromorphic autonomous systems</a> <font color="#6f6f6f">Nature</font>
- Enhancing tumor deepfake detection in MRI scans using adversarial feature fusion ensembles - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE5uM3N5Q3RDRHdlZFB6Z1NTQnhsUkF1NE1FS3UxQzZxTW9FVlR2V2FUckRZR05SeWItc2pWeGphR2NEOVZpdjZiQjdlMl8xNEtiOS1WR2xJNjc5bFJMNk80?oc=5" target="_blank">Enhancing tumor deepfake detection in MRI scans using adversarial feature fusion ensembles</a> <font color="#6f6f6f">Nature</font>
- Dual-targeted adversarial noise for 3D point cloud classification model - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE5zQ1J0YjQyOTlmelpfZkJ0cUhqVVlIM3NVcjktYWtZaE92eGpzWG5kWUswYXNVVjZCaU9kaDNSVEZHcy10RERoamcteUVycHBBUHlQYWpmdTduWGVSWWJB?oc=5" target="_blank">Dual-targeted adversarial noise for 3D point cloud classification model</a> <font color="#6f6f6f">Nature</font>
- Segments-aware universal adversarial perturbations purification on 3D point cloud classifiers - Frontiers
<a href="https://news.google.com/rss/articles/CBMimgFBVV95cUxQSTh1UGhid2tZRDNpRDZEbFhnam5IeXRObTk2eEM1WVU2WV9rTHJ0N2hiNElja1dhNUo2NC01TllTaXhpUmpPa0FVdFBXaDUyRzFmaVU4WGFVVVd5MUZ2YjNma3pqQlBjQ1ZQdkxfY201eDMyenFwY29ESHA0bldmeE1tN2hCb3VndHltZ3JZWXMycS1NTnJ1WXFB?oc=5" target="_blank">Segments-aware universal adversarial perturbations purification on 3D point cloud classifiers</a> <font color="#6f6f6f">Frontiers</font>
- Neuromorphic computing paradigms enhance robustness through spiking neural networks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE90Vnk1NGVpeXRDN2V6V1NwbHVVeFhhTTJZRU1XbTVhVW93Y1pkTmZhdWxoUVBocmtGNDViWTdXWnhTUnNLMDJXLWJhN09ReUtkRW11VVhGQ3k0a0FtWEVB?oc=5" target="_blank">Neuromorphic computing paradigms enhance robustness through spiking neural networks</a> <font color="#6f6f6f">Nature</font>
- Hybrid framework for image forgery detection and robustness against adversarial attacks using vision transformer and SVM - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1KclB0ZWN4S1hqdU5VX1YyUnlpSGlETmtpaGRaLVg4aTFEUm5CWmVNcS1EQ2lnejRpRDFGeHpOcVpsQ3RROHIzYmtCMEpKRnp3VGRnWGhOU0ZOcEtnZ0s0?oc=5" target="_blank">Hybrid framework for image forgery detection and robustness against adversarial attacks using vision transformer and SVM</a> <font color="#6f6f6f">Nature</font>
- Voice Deepfakes and Adversarial Attacks Detection - Biometric Update
<a href="https://news.google.com/rss/articles/CBMikgFBVV95cUxNcWREdEF6T2tQTS1OclJwT0hGSUdTMGVBdFdUSkxfWEx6Nk9Vbi0yN3NVMks4NEs1ZDc0c2FjaGFaSUw3SW1felRXZnRKNmd6TW42Nk05VGtMVE11UWROSWYtNEFkVE9Hb0VNVXc1VDFnbm44TVRLX0VrMUd0NTVKdlBWTWhVMjNkNTZXQ0JPWERrQQ?oc=5" target="_blank">Voice Deepfakes and Adversarial Attacks Detection</a> <font color="#6f6f6f">Biometric Update</font>
- Disrupting the first reported AI-orchestrated cyber espionage campaign - Anthropic
<a href="https://news.google.com/rss/articles/CBMiZEFVX3lxTFA5dFRNdGRsMmVuU1RsMmI2dUlFUG9fUVEweEJsQ0hZUTFTNnk5NVk1a3QzWG5jUFVnVFY2bGEwSnlhQWk3bHd0c0NnOXAyS0hMelA3MW9ZTldsOXJ5VURESHJYNWg?oc=5" target="_blank">Disrupting the first reported AI-orchestrated cyber espionage campaign</a> <font color="#6f6f6f">Anthropic</font>
- Investigating vulnerabilities of gait recognition model using latent-based perturbations - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBmbndtMGFMT0V3Z1pndGcxbzlodjM2aXJxMS1VZjVqalNsX3A2TjYtWFFtVTEwby0xaXkxU2RqbnZFd1RZVzgzX2NXVTV2RVhOc0N4OVRtV3NmVkt0Y1J3?oc=5" target="_blank">Investigating vulnerabilities of gait recognition model using latent-based perturbations</a> <font color="#6f6f6f">Nature</font>
- Popular LLMs dangerously vulnerable to iterative attacks, says Cisco - Computer Weekly
<a href="https://news.google.com/rss/articles/CBMiswFBVV95cUxOSldZQlExUU9LSkZkcFYwbC1VZmd2TGxyUUM0Y0thaUVYN3hXcEdPdkdwXzRuaThMTmNUSkZKenF4NWswXzVEcVRNQ2ljY2l2VzJDOEFVM01ma1VpMFVRbHprcld6OVVnOV9SU0hSRWVvRF9VZW9uYmJrWGNBRnFWU3NLVDdReWdZNHlGeDlWVjgweHlwMmtBakF3c0FkS0doYVI5QzZJVXJnSW9VbVNlT0Y4Yw?oc=5" target="_blank">Popular LLMs dangerously vulnerable to iterative attacks, says Cisco</a> <font color="#6f6f6f">Computer Weekly</font>
- Multi-Turn Attacks Expose Weaknesses in Open-Weight LLM Models - Infosecurity Magazine
<a href="https://news.google.com/rss/articles/CBMifkFVX3lxTE45RVNRSWpnbzFZejZJc3VMeHpycjRnMjVFMS12bUY0ZHFVMEQ2aGMxNTc5VW1oaG9zQWE5M20wYVZ2NUppN3QxaE9Ud1oyMW1DaDFJV0dPbGlYYkdRNjBoMGplN3hnR2JqTjlHcU05RmIxR19ucG5Oa3NFbXZtQQ?oc=5" target="_blank">Multi-Turn Attacks Expose Weaknesses in Open-Weight LLM Models</a> <font color="#6f6f6f">Infosecurity Magazine</font>
- Death by a Thousand Prompts: Open Model Vulnerability Analysis - Cisco Blogs
<a href="https://news.google.com/rss/articles/CBMibEFVX3lxTFB1Z185c0JVQklaLTlUakpabEt6dXJCdFJQcHQ0RGxzU1pfSGVGblZUWkVzVTJRQUl4UjBybEZER0xvd0tqTXhBRTJBaDU2T2ZlX0ttQmwyX2hBaWFWaEJjTTdFQkZqU3pZX1VIMw?oc=5" target="_blank">Death by a Thousand Prompts: Open Model Vulnerability Analysis</a> <font color="#6f6f6f">Cisco Blogs</font>
- Adversarial susceptibility analysis for water quality prediction models - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFA2NEhYYklUM1dVc2FrRGpsZjJiVUR5WmZVXzFvQnlsajhORWRyM3BRLXlHQmJaa1BmdEVzQ3B5NVByQWJqN2o0ODU5X1ZJX0w5UDZiUkwwWFEzbVpsNmQ4?oc=5" target="_blank">Adversarial susceptibility analysis for water quality prediction models</a> <font color="#6f6f6f">Nature</font>
- How can you protect against adversarial prompting in generative AI? - eeworldonline.com
<a href="https://news.google.com/rss/articles/CBMingFBVV95cUxOUUhfVW5KNkJUS2tzNkN6MkVkaldtWFJJM0dWTWFZWmNuVE5uX1FNaGg0VTF0VXQ3dWV4NmNHNlhXYzVNSHhEOGI3VFBueExqMF9qUDBzbnJaRERlb0xTOWtzR3dFRWlpLTRpVzNTRWRiRjJLZ1VlLXU0UzM4VVl0YWtGRGktRlBEUjNNLWJOSFV2ckphUFpoa2JBZGdwdw?oc=5" target="_blank">How can you protect against adversarial prompting in generative AI?</a> <font color="#6f6f6f">eeworldonline.com</font>
- Researchers unveil new tool to detect stealth cyberattacks on critical infrastructure - Texas A&M Stories
<a href="https://news.google.com/rss/articles/CBMi0gFBVV95cUxORFk0RkQ0cWVYTHdUUUpIZ2JoMnAxRHNHUlc0QXRsc05CQzN1UlJXdXNUZ01XMmhjbldYeGdtVzA2R05aWWVHRTFtS0xReV9lR2t5UXcwMTVQNTNlVzVkc1ZzbmlwQm5POVNuY241T29aV3pvOXZoY2hQSE9MSEpmclhXMmZjRGFrQTV6UVJ4OW1OXzBmclE3SVhXWXJiejRBcy1SREItRUVuYzdiUU50eXByWmQ2UzNFUnhGMmN3OTFQdkdmU2lob0JEQ01rN1huMnc?oc=5" target="_blank">Researchers unveil new tool to detect stealth cyberattacks on critical infrastructure</a> <font color="#6f6f6f">Texas A&M Stories</font>
- Quantum Enhanced Adversarial Robustness Achieved - Quantum Zeitgeist
<a href="https://news.google.com/rss/articles/CBMi1gFBVV95cUxPR3hPX1BIZFc2ZUtEN2duOHlWbWsxZE80NTJfbG9lS0l1SDUyUHNCSGM0VE53RGc5X0hIaXNCaHpUTlREUlJiOUkzQWNkSTl4MkJIM01jclN3X09ZZFJyeVM5VEk0TXFCQ2pLOEUzeTRGR1JfY01ybGptT1FLeEVSc21Sd0JGQU4zTFJvdmMwbTI1cTJiSlRZYU9VaDdrT0tyTUhKSC1hTVNSVkJvalJtcXg2UXN1SzlUT29MblZFTk9kSHJKUTlBOGVYMWlnOVRHMUR3QkdR?oc=5" target="_blank">Quantum Enhanced Adversarial Robustness Achieved</a> <font color="#6f6f6f">Quantum Zeitgeist</font>
- An incremental adversarial training method enables timeliness and rapid new knowledge acquisition - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE5vX25meTVQRUxLd2pjWWV5V1FKOW5KbXMtc3pfQkZBaHVlNHp1OGVQRllZS05KREdiOWtlTGZTRjlNdy1xdnBSMjl2dG4wbENZNkhPYV9uT3A4eWo5TFRF?oc=5" target="_blank">An incremental adversarial training method enables timeliness and rapid new knowledge acquisition</a> <font color="#6f6f6f">Nature</font>
- Adversarial prompt and fine-tuning attacks threaten medical large language models - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBsQlJLOUFYZXZLU0ZGWm0wZEo1MnRYN3dOV3MyRXFVbmoyTXgtYnkxR1hqb0Q1SHJHRTRuOHRMb0JRdGowbXZ2RlJBUTlUbXpnM05ucU9OZXFpa0F5NS1V?oc=5" target="_blank">Adversarial prompt and fine-tuning attacks threaten medical large language models</a> <font color="#6f6f6f">Nature</font>
- Detection of unseen malware threats using generative adversarial networks and deep learning models - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBqOG9WbW9NVlpFTGtOdF9FczRjU1ltRDdVRWhqZFNHTk9MUmppOHc1dUZfaVlrZlJRcENrSG92X2lPQ1R0MjB2MUhXUlN4a2tlTWhDaWxSaE52MEZWY1dr?oc=5" target="_blank">Detection of unseen malware threats using generative adversarial networks and deep learning models</a> <font color="#6f6f6f">Nature</font>
- AI poses risks to national security, elections and healthcare. Here’s how to reduce them - The Conversation
<a href="https://news.google.com/rss/articles/CBMivAFBVV95cUxQMTc0WVNzT1hRS29vNW5KZ1ZtQ2k1Rml5X00zQklNTDFJZ1FFcnVWUXJJOGRDalRLNnR2RjhLTS1YTTBKSk1WTk1lLWtmNW1NLUtZVHh1RlMwd24tbFZkVEVxa1JSMndsVEcyb19FdUI4MWo0RGFTeVF2eXVLRUVqek1NbVNBSG1vbGVEVlZnakVKN053MkhBN2pIaVJNQndnZFdyTXFXY1NPMlJ0NnRXYWFONGMzekUwX0lMUg?oc=5" target="_blank">AI poses risks to national security, elections and healthcare. Here’s how to reduce them</a> <font color="#6f6f6f">The Conversation</font>
- Adversarial natural language processing: overview, challenges, and policy implications - Cambridge University Press & Assessment
<a href="https://news.google.com/rss/articles/CBMijAJBVV95cUxQSGNaQVNfRFY1dEFkM1JET09NVFRfaUdlc1pSdGFHdDlUNElzTFJsMVdRYjc1Ympab3RPUXc5RXZzWkY4cXU1akpwR1I5TlBYZEdyb2lZOVhGUlhhcmxQYi1PbGU4VU0yZ2JKN0h3NlZkUUtEajU3UUdqeG9GeFlYS19nb0FTUnNTdk9qMDdmdU1WTHgzMWhMSDhUcTh3WnlyMFp3enZtWDVYYnNRZlh2VEl4dXg5S25mcWZEbnkyamZQM3B1WDBmN2xCREtIdmxDWWJyTDllUm9yOHZjZnNmRzJqeWxOcWZ6cElMTGFEbHpjYlY4b3NDRFhoRTZGT1VpNkFoLXdkdUZhbDc1?oc=5" target="_blank">Adversarial natural language processing: overview, challenges, and policy implications</a> <font color="#6f6f6f">Cambridge University Press & Assessment</font>
- MTD-AD: Moving Target Defense as Adversarial Defense - IEEE Computer Society
<a href="https://news.google.com/rss/articles/CBMieEFVX3lxTE00S1FELVdISUdjMWlLWWRTdFFFQmlPVjlncnc3TEVwRGJYOFI0c0hnZV9hSkJ5ekMxYjNqMFVaTngyVENkQjRtQ295d0pvZW9VenRueXlaNU9jZWZvcmRFRkFtTVVzSFRlTEk0Z1MtSjJoRmhtOFdLWQ?oc=5" target="_blank">MTD-AD: Moving Target Defense as Adversarial Defense</a> <font color="#6f6f6f">IEEE Computer Society</font>
- Diversity-enhanced reconstruction as plug-in defenders against adversarial perturbations - Frontiers
<a href="https://news.google.com/rss/articles/CBMiogFBVV95cUxOZkFnWXQzd2VxX2E1ZmFrNkFiNWRaY0ZLcDhSYTNXRG1yck9tRW44eG5MTG1kR2lnNHlsRFM0NExUc3FjbUttVzN1eFE2WDZvVFdKcEhReFhvbnpudDllWEdvekVSakJMYnpNdWpKWndBTmRIOGFEcGFya3FtZDF0T2R3blpqMDNvVEY4eFQyelVPSE54WVB6LWRSTFJqNWJDbHc?oc=5" target="_blank">Diversity-enhanced reconstruction as plug-in defenders against adversarial perturbations</a> <font color="#6f6f6f">Frontiers</font>
- Identifying significant features in adversarial attack detection framework using federated learning empowered medical IoT network security - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1mR3phTmZGcjA2NXFIVG9pRjh4NHFYUjRBSGhQTjBGSW5TVVEzbF9RdWxEMEg2MHZxdW1sRm5MeGxDci1ZQk0xNmVBcGN3b3lmSjBIUUhJWnYtZ3hfLTNj?oc=5" target="_blank">Identifying significant features in adversarial attack detection framework using federated learning empowered medical IoT network security</a> <font color="#6f6f6f">Nature</font>
- Review: Adversarial AI Attacks, Mitigations, and Defense Strategies - Help Net Security
<a href="https://news.google.com/rss/articles/CBMirAFBVV95cUxOSnVoR0Jxc01MamgxNmNfY0R0MThIeXFmeDl0YkZXNVdYRS1wOGpZU2QwckdwZWphMlZMVTR1M3V5WnpUeElleGdNQmZVaU50NmFVN2hOQUZhSC0wUFNZdzZwemlSeVV6U1dXOU5RRG9wblBnSDk5eWV1UjFaYkpTaHFBUGN6Sklnc09Tb0d4c1VweXg0eXhaZTRJTjRsQ28wZVFzYUpBU3ZGUnJq?oc=5" target="_blank">Review: Adversarial AI Attacks, Mitigations, and Defense Strategies</a> <font color="#6f6f6f">Help Net Security</font>
- A comprehensive survey of deep face verification systems adversarial attacks and defense strategies - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1DVDhhX1BzblJOX0JFUDRTSDBrdE53Z3l0dzhvdVhrcGVSS1Fpb05pbWNsV3ZjOVVnUTRha1JqUlhBcWVuYVVUcTBQMXp0bGpsYWpjYlp4dFg0NVZfTTRJ?oc=5" target="_blank">A comprehensive survey of deep face verification systems adversarial attacks and defense strategies</a> <font color="#6f6f6f">Nature</font>
- Defending Against Adversarial AI and Deepfake Attacks - The Hacker News
<a href="https://news.google.com/rss/articles/CBMilgFBVV95cUxNTWMzalJ0LTNBb09PdW5ZM0xqeW1DUS1tV0VURkhoTngtanY2MWpYYk1NMFFYLVJ0bDc1eGs2ZW1mVzItTHB6cEo1dkZHcjNIUkplbWdxVEw5M0FhaFF4M2dfX1pOQUpCUVJ1QXdlbzMxQlBidGlUWF9ERTJEX21PTDFXcUhEdTZJUEV0djJ1RHdKVlZpMkE?oc=5" target="_blank">Defending Against Adversarial AI and Deepfake Attacks</a> <font color="#6f6f6f">The Hacker News</font>
- How to Test an OpenAI Model Against Single-Turn Adversarial Attacks Using deepteam - MarkTechPost
<a href="https://news.google.com/rss/articles/CBMiwAFBVV95cUxNdVliWEpDSU9vQmZHMl9OcGFNTVVNM05ubVMtcEd6Z2JKNVMtWFdENFdDNXRzZ1p0MWVIcUZBNk1fUWV2SDBybVFuM2I4YjdwYzNzRmsyYkN5ejF0ZWt6YXRBVjM0YlpSdUlobDhLa04xQ1JzSUV2UG9CU3hjVllwWF9INFJUTVFkeVhWa1JyVXMtcVVrbTd4aV9DNHZ3RU1QWWtpYmN2UkFNbUhHMTdBcWVVZmJmLWE0NlZOelQxWFfSAcYBQVVfeXFMT3l0YW52N05GUDY0MVMyOUozeFRMR19mcDRxbjdmTlEzT25XcmpjU190MjJrUHhlckh6U1l4WWdReVZ2NU5BMkl3TWkwVVd1LW5iWktveGtNcndQT2ZiUHVuZUNiNFgyQWVqWW44Mi1PM3BLTVZqazVtUkVaT2ZTaV9qc3BlcXhBTkFpRVFGd0hnYUdEYW41LTh4Q0h3ZmJzU0NJZGIweEhEWV9wQk0zUjZwSVlLcEdTQzhPb3N4a0xRRjlaVXJB?oc=5" target="_blank">How to Test an OpenAI Model Against Single-Turn Adversarial Attacks Using deepteam</a> <font color="#6f6f6f">MarkTechPost</font>
- MeetSafe: enhancing robustness against white-box adversarial examples - Frontiers
<a href="https://news.google.com/rss/articles/CBMimgFBVV95cUxNbWcwTXRXX3RvUG9xOEZ2Tm5FalhQN3VzeFgwLWVyc2FZSlc5WVp0WGw2UUxrMEFKX2syZVVxZVFMdWtKbUlycUR6OWd1dzVVbzVmTDl6N2M5QmVPSFgtQVJzWnlXczVKYkJWbmxWX04wMnZhaE1oc0gySFN2SWF1cUY1aHJsWHdxVEJNRmpsU0pwNjFZYnRmY19B?oc=5" target="_blank">MeetSafe: enhancing robustness against white-box adversarial examples</a> <font color="#6f6f6f">Frontiers</font>
- FLAT: Flux-Aware Imperceptible Adversarial Attacks on 3D Point Clouds - springerprofessional.de
<a href="https://news.google.com/rss/articles/CBMisgFBVV95cUxOSmZZODRTc3RGZmpFVE03dUctd1FoaHNUZDU5SGpTVncyQ2ZWRGIxaVJSZUZKN2VRY3Ftdk1SMFFiZ2RoZU5rTy1hOUZxTmtiRm1oM1BXekp2eWdjXzF6cnRNRTItRkdCaXBIRURpRFNLcExBQ1EySUFBYnRSMkQ2Rkh4cjAzSzdoc1I4VzFyc3Q2TV9rT1RnUzNmcHFEQWEtaW5JWG5iY0pRcVN1c19fMTJ3?oc=5" target="_blank">FLAT: Flux-Aware Imperceptible Adversarial Attacks on 3D Point Clouds</a> <font color="#6f6f6f">springerprofessional.de</font>
- Topological approach detects adversarial attacks in multimodal AI systems - Tech Xplore
<a href="https://news.google.com/rss/articles/CBMikAFBVV95cUxOaXVYNzhiMnRJcmJPQjItME91enRhbkRtYVU1eVBHRWx6ZEhNVnBaRDV4em5vVTVfcU1LNHVSZHFyU2Q0VXAzcmJqeV9hSjNEd0ZURlplUVl5TVBYeUREQ0NGOVpuNms0RVp0Zy1ubFVMZWFsSWttS1ZVbGV3ZWpORmRuZ0dHZ0p6TDVaRzZoVG8?oc=5" target="_blank">Topological approach detects adversarial attacks in multimodal AI systems</a> <font color="#6f6f6f">Tech Xplore</font>
- Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support | Communications Medicine - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE15b0kwRWczX1JKRy1CQzZiSUk4dVFuSjVMbzVkSVFKNDkxS1EyMTdacDl1OUZlNzd2MEo1MjVGTXJpckFLdzNxNVcxb3NMVnRuVF9TNG9ReDk1MVBGMERv?oc=5" target="_blank">Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support | Communications Medicine</a> <font color="#6f6f6f">Nature</font>
- Darwinium Launches AI Tools to Identify Adversarial Fraud - Finovate
<a href="https://news.google.com/rss/articles/CBMihgFBVV95cUxPdXM3NTRqOFFWNE5QbG1aMGZUemFpekdjSzlweXAzd0ZiVmNva0ZJUThmNXItS190R2RyTXY4aFdhMVBER19iZjNWRGRvZU1MblpRRGpEblZVVWs4WFdoOUtZWnlEbUc2Z2hldkpmbEhpVTB3eXNxWFMtSmhxV1dzYjNDR3FoQQ?oc=5" target="_blank">Darwinium Launches AI Tools to Identify Adversarial Fraud</a> <font color="#6f6f6f">Finovate</font>
- New manipulative attack method gives hackers control over what AI sees - Cybernews
<a href="https://news.google.com/rss/articles/CBMidEFVX3lxTE5tUDc5eHBvbEswRmxzRjRUekJ2cVdYNlR5UldpcWVOQ0hrSWJVaFY1alB4elk2cFRpOVpKbTI4T0M1YlB0ckkwd3Vzck5ULWF3TkVPaW1YbkN5b2NNTlRudS1kVk51dEh6UElGUnYwSnM2R04y?oc=5" target="_blank">New manipulative attack method gives hackers control over what AI sees</a> <font color="#6f6f6f">Cybernews</font>
- Learning atomic forces from uncertainty-calibrated adversarial attacks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9vcUZHVkpVSFJEdHhack5hbUd6WjJabDY0YWFTSDRsSmMxajNEdnVWS1AzU0lwVjRYb2pibzg2M0NzMENuWkFnTTJMdlBwR2hlMzVpUW9aOEphQmFOTmF3?oc=5" target="_blank">Learning atomic forces from uncertainty-calibrated adversarial attacks</a> <font color="#6f6f6f">Nature</font>
- Gradual poisoning of a chest x-ray convolutional neural network with an adversarial attack and AI explainability methods - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBMNkNneUVKTzBHc0lmZ21zVkt4dXZ1SnpHN21nMmI4VmVIalcwekszUG9iNm4xWFhQWEpYakJZaXI5QzJEd0hZX2N4Zlk1UlVpUjY3eXNQeDZ0bWlackRr?oc=5" target="_blank">Gradual poisoning of a chest x-ray convolutional neural network with an adversarial attack and AI explainability methods</a> <font color="#6f6f6f">Nature</font>
- RisingAttacK: New technique can make AI 'see' whatever you want - Tech Xplore
<a href="https://news.google.com/rss/articles/CBMidEFVX3lxTE9oYkFreG5mME5xTV9vbXA0U1hoVThNN29oV05HNGZTVEZYT0pZTnZEMWg3cGRpYXNSbUpFZnVRcTNFMnpIbzBwRF9WVzNOTUpGckx3M2FDd3dFLVAtNGRoeHA4aFlZM3FMUDFLajZKZE1fY2pv?oc=5" target="_blank">RisingAttacK: New technique can make AI 'see' whatever you want</a> <font color="#6f6f6f">Tech Xplore</font>
- Assessing the adversarial robustness of multimodal medical AI systems: insights into vulnerabilities and modality interactions - Frontiers
<a href="https://news.google.com/rss/articles/CBMijgFBVV95cUxQNXhhRU1TWjRWSmlpSnJCRzBxanZqbXlMaVlZNHplVVgwME9Gb0xFbFViRXByUENyYjZic19ia090U0xvaW5hLWs4Mkk0aHFaNXJTc1h6MV9HNlIwYVd1X3lqbW5GdmE2Nzc4WnJnUnRWSFc4UTA2dC1VeWJDN1VSV2NXWXd2b08yTFljQmtn?oc=5" target="_blank">Assessing the adversarial robustness of multimodal medical AI systems: insights into vulnerabilities and modality interactions</a> <font color="#6f6f6f">Frontiers</font>
- Data Poisoning: Current Trends and Recommended Defense Strategies - wiz.io
<a href="https://news.google.com/rss/articles/CBMiY0FVX3lxTE14b3YyTDVYNmlqeHVQWlI3WkhaMHRzUzdhMzBwQ2psT1ppSGE2LVo5SnYwUko2MjlFdW9QejgwXzFZcTRsdVdOT0oteFNaTXRSUG9ncVRRSEwxeldjbi1mWnVrWQ?oc=5" target="_blank">Data Poisoning: Current Trends and Recommended Defense Strategies</a> <font color="#6f6f6f">wiz.io</font>
- Mobile applications for skin cancer detection are vulnerable to physical camera-based adversarial attacks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE85OXNJLUlOenJoYV9TNHZsSmZBNDJNQ09ZdFhDbDVRRS1VVWI0bENKSGQ5dVdLaHBNVEhYRW5WVkhZaWtLRGJxWnh1S2VEcDBPNlEzenFod3dreGZucTVB?oc=5" target="_blank">Mobile applications for skin cancer detection are vulnerable to physical camera-based adversarial attacks</a> <font color="#6f6f6f">Nature</font>
- Efficient black-box attack with surrogate models and multiple universal adversarial perturbations - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1iUVdQQVljZEhhSXBEWkNuTDBDUDVGb2c0am5NbUxlZVhtRWlfT3VsYTExMjZBVENMQVRka1BjOFR2cW1TRFNKcXZWMjZpRGx2X3BXODhCWjY4Z0JjeXRZ?oc=5" target="_blank">Efficient black-box attack with surrogate models and multiple universal adversarial perturbations</a> <font color="#6f6f6f">Nature</font>
- A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE5wZ0ZLdllyano5dnp4dG90d04xUWE2UmNGaU8wRU9KWU5JTXVIWnc0OE1uRjZRN3hWcENqbGtKQlN3T1hpaUZHZno2WTQ4WHJfVlRSbmxQbjVEa0dfWG5z?oc=5" target="_blank">A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing</a> <font color="#6f6f6f">Nature</font>
- Adversarial Prompting: AI’s Security Guard - Appen
<a href="https://news.google.com/rss/articles/CBMiXEFVX3lxTFBTbUdqSEtoMEhMaHJGTldVUmRoelB3bkVRNHlNNVNfV1QwREloSVdHUEMzb1dJSkZXR1RRWFFhVWhjdW9KZ1VMdWIwTGtuTWRYNjZYWTNhVW9TZk9a?oc=5" target="_blank">Adversarial Prompting: AI’s Security Guard</a> <font color="#6f6f6f">Appen</font>
- An enhanced ensemble defense framework for boosting adversarial robustness of intrusion detection systems - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9fc3lEcmQwQkstSWJjTjdPOG5nWE53cHZXMlhtOFp2amRkd3U1c3I5Nk5pc2U0MmZlMUpBaFhFN1RYQnV3bm1lbHRwWUw0bG4yeFVvUjFoM3VHNlMzLUs4?oc=5" target="_blank">An enhanced ensemble defense framework for boosting adversarial robustness of intrusion detection systems</a> <font color="#6f6f6f">Nature</font>
- Defending against and generating adversarial examples together with generative adversarial networks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBVNWl1ZGMxdVJDb2FVcFVoc1psbjNsRkRsN3duN1FVTnB6YXlwdDlnWGlBNmdLT2FUWUZ6ODJMRUZOdXE2bzNUenR0M2V3dnRrTGpMVUNVTnl6RXRCMFc0?oc=5" target="_blank">Defending against and generating adversarial examples together with generative adversarial networks</a> <font color="#6f6f6f">Nature</font>
- GEAAD: generating evasive adversarial attacks against android malware defense - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE93b2k2MTZWazFYcmN4YnpRVW1vNVBtdkh5alEwRGVUcXZjcjQ2WWNYT081bXdlQzVCUEFLVlVKZUVEMFhTWWZBT2pQM090S01Bb3BRc21SVEtXTGpVZ3VV?oc=5" target="_blank">GEAAD: generating evasive adversarial attacks against android malware defense</a> <font color="#6f6f6f">Nature</font>
- Tailoring adversarial attacks on deep neural networks for targeted class manipulation using DeepFool algorithm - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9Bbnl1ek5JbnZyNk91N2pKWWx6QlpKWklHTmZDYnU4eThVcU1kZE82a1U5MzdIT2JSZnlrZjJnamtreW5QOWdzMEh3V1UzLU9LWWxTMU94RW12RVR1R2RV?oc=5" target="_blank">Tailoring adversarial attacks on deep neural networks for targeted class manipulation using DeepFool algorithm</a> <font color="#6f6f6f">Nature</font>
- Hard label adversarial attack with high query efficiency against NLP models - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE92Zm9IU3hYM2lUbGdBejNOZjlzT0ZzS0w3ZGpKbGEtb0pJcHNkVzhZMFNMamxrUEZWTEZfa2hkQmpXRC1xc2ZHMHJSNy0xdURTNFZsRElMbTl1QnVJeXY0?oc=5" target="_blank">Hard label adversarial attack with high query efficiency against NLP models</a> <font color="#6f6f6f">Nature</font>
- Mitigating opinion polarization in social networks using adversarial attacks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE0tU0sxUjFtZ21FMEdEamdaQ2NLRVRyS2VXVmJhdTBJRTdQdVpWbEswMTB4X0pSdmRmaWhUUUpXSkRheHZGWXFYR2FKQWRGaFNFNXphNHlRQU05TlBSZW5Z?oc=5" target="_blank">Mitigating opinion polarization in social networks using adversarial attacks</a> <font color="#6f6f6f">Nature</font>
- New AI Defense Method Shields Models From Adversarial Attacks - Newswise
<a href="https://news.google.com/rss/articles/CBMimwFBVV95cUxPV1p0LUF0MkYtRDZKekE1U05TcmZOTi1SeGhHYzRXckNLS0dscng0cU0xTnIydEJxN0RFY2MzM0dSbzJYU0ZJREtKZU0xY3NjLWtLUTF3eExLMmNnalJFc3dNMjB5cWp1SXJOR2piajktMFZEc2gtWDNmSDl0WHBlQlZwZmdqaDVRWVhJQkd3N0ZicHI5RGFlZHNnWdIBmwFBVV95cUxPV1p0LUF0MkYtRDZKekE1U05TcmZOTi1SeGhHYzRXckNLS0dscng0cU0xTnIydEJxN0RFY2MzM0dSbzJYU0ZJREtKZU0xY3NjLWtLUTF3eExLMmNnalJFc3dNMjB5cWp1SXJOR2piajktMFZEc2gtWDNmSDl0WHBlQlZwZmdqaDVRWVhJQkd3N0ZicHI5RGFlZHNnWQ?oc=5" target="_blank">New AI Defense Method Shields Models From Adversarial Attacks | Newswise</a> <font color="#6f6f6f">Newswise</font>
- The inherent adversarial robustness of analog in-memory computing - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE15US04ZGlfb2t1aTN6SHNLOGFja3JNd0FZQURlQ3JMUHJvVmhoR3A1V2hTS1RuMkVBd09DN1ByMTEwSkZIMVFORWZIM3pwMks5UkRxemRLYUlEUTZHR1Yw?oc=5" target="_blank">The inherent adversarial robustness of analog in-memory computing</a> <font color="#6f6f6f">Nature</font>
- Explainability-based adversarial attack on graphs through edge perturbation - ScienceDirect.com
<a href="https://news.google.com/rss/articles/CBMie0FVX3lxTE5xSXFESUY1Z2U4V08teE9ZRjhiYWxxbzhtdlpZb0hnR0l4VzNIUXNnUFJQVXoyRUZkNzI3YURLWTR0UXhUTW93OUZ0dnZtT3FVdDAyOGNqODdST2JrbDBzZldfRElXYkpJdzJFYjdNZHA1ZFNsamNuR0FRNA?oc=5" target="_blank">Explainability-based adversarial attack on graphs through edge perturbation</a> <font color="#6f6f6f">ScienceDirect.com</font>
- Universal attention guided adversarial defense using feature pyramid and non-local mechanisms - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBWSjBSRjlsM3hvS2xWMmdDMkt0bXJPRlJLaEcxV1BNdE9EZC0xNG84bmRvZ2ctZmhGS09NMnFSenFkTzFSd0xZTzMyYnBjS210aVFzMWIyX01LSnRMQllz?oc=5" target="_blank">Universal attention guided adversarial defense using feature pyramid and non-local mechanisms</a> <font color="#6f6f6f">Nature</font>
- A two-tier optimization strategy for feature selection in robust adversarial attack mitigation on internet of things network security - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9KR2U0WkozRVlYaXpFVFJkU2hjSGpIMml3SndBVUhnWEJuWEZWMGdHWVNid2JkNERRd1FFODVGRzJvZzNobS1kT2Fpb0JtOFVxV0ZpMDdCLUlFOEM2aC1J?oc=5" target="_blank">A two-tier optimization strategy for feature selection in robust adversarial attack mitigation on internet of things network security</a> <font color="#6f6f6f">Nature</font>
- Exploiting Trusted Systems: How Adversarial Attacks Can Manipulate EPSS - Morphisec
<a href="https://news.google.com/rss/articles/CBMipAFBVV95cUxNRFpKbDV4bU9ORTk5NGdGQ25PdzF0Zmc5RzJ3akgwM0d4dEZjVEphcVNLWVRrX1hFVDVQZXRFNTFIdlVoc0hrWV9yVVM2VkdsM2hvSHZaajgxbm5mZjVmdXNmZ09IVnl2WjQ5QnlJbHJEbEFNb2w4N09nbVFwNlBIaTlNNVFmTFJ6TGEzdFd1Um0yR3VORmFBSzZWemlOU0tnU25YYQ?oc=5" target="_blank">Exploiting Trusted Systems: How Adversarial Attacks Can Manipulate EPSS</a> <font color="#6f6f6f">Morphisec</font>
- Adversarial Attacks in Explainable Machine Learning: A Survey of Threats Against Models and Humans - Wiley Interdisciplinary Reviews
<a href="https://news.google.com/rss/articles/CBMickFVX3lxTE5SUmJVY1c5cHVyZFg2eG42emdGZUZCaUd4bjRBbmp5M2Q4YU5WVG9wWGJ3U1JmU05Zdk9NNWpNV2JwcTFVQ0VkVTlycXNGLXNYSnk3R3BQTWFFN0FCbnRMMFEzUVpCLVFXdWltRlhYNTl6UQ?oc=5" target="_blank">Adversarial Attacks in Explainable Machine Learning: A Survey of Threats Against Models and Humans</a> <font color="#6f6f6f">Wiley Interdisciplinary Reviews</font>
- Medical large language models are susceptible to targeted misinformation attacks | npj Digital Medicine - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE92Y0hkcWt5SXhGdm9QX2U1Y09kZE8wZUZsZ3UtRU9waGpXcHAzQlJCZDRJUVRyTFhFUFIwSWVSajFBNVFuaE1IZjhhUy1sMnNDZTJkM1o2UFVWY1QzWV9j?oc=5" target="_blank">Medical large language models are susceptible to targeted misinformation attacks | npj Digital Medicine</a> <font color="#6f6f6f">Nature</font>
- Adversarial attacks on AI models are rising: what should you do now? - VentureBeat
<a href="https://news.google.com/rss/articles/CBMioAFBVV95cUxOR3plSFM0cW5RVjlRVnd6ZWUyd3FSQjc5R0R2Y3dOcjdaRFJmOHlvanVRUjJ6ZVc5QXBBS0hibU9vejJVYndDbHVzSmNfRnRFQUhUUjBfMlZRQ25GNGZRSU5vUWVyTHlGekxFY2FJU083MHVGMUdEbjhZTU1TQjRINjRTSWswU3E4NVJBdGUtLXZNQnBWWGVEbGItR3E5WUFX?oc=5" target="_blank">Adversarial attacks on AI models are rising: what should you do now?</a> <font color="#6f6f6f">VentureBeat</font>
- Adversarial attacks on neural network policies - OpenAI
<a href="https://news.google.com/rss/articles/CBMifEFVX3lxTE9vMEJ5eDNIZXV5QWFSQmVqaHNham1uaEN6Z3VGZDZ5UkRSQ18za3E3N0U4WnNXaGUtRmVkM0paVVdIMjU3VzkxWWdGVE5LeGlHM3VNOVk4ZVZfUGd4RXliQTlJMklOX1k2U2lqNkFmLTA5R3F6bnRueHhiVlI?oc=5" target="_blank">Adversarial attacks on neural network policies</a> <font color="#6f6f6f">OpenAI</font>
- Safeguarding AI: A Policymaker’s Primer on Adversarial Machine Learning Threats - R Street Institute
<a href="https://news.google.com/rss/articles/CBMiswFBVV95cUxNSGc2QzVOTlRVSjRyd3o3SkI2VHJhSDZ5U3M4Z0dOMUJKWXR3ekhVUHUxSXk3Z3dUTDJJd2tEaERZbTU4N09ZVUdIT0lXMmZIckV4TWh5QW4xZTdKYjZrVDhhN3k3cjNBOTgtZmlxQjR0T0F6RWNua0VvNlZ5RF9fMU1rbGRvR212TTJYWVJBVE1ndEZBTWxSTC10a0NBMUw4bmtyMTIyNlZWd0FLY1R5RjZrcw?oc=5" target="_blank">Safeguarding AI: A Policymaker’s Primer on Adversarial Machine Learning Threats</a> <font color="#6f6f6f">R Street Institute</font>
- Defense against adversarial attacks: robust and efficient compressed optimized neural networks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9jY1FseHdISmx3UFN1MlRiYUlaU0xhRTlORk5XN3NmMWRCdEEwZG95R3hUOEJucm1yT3AtZk1Bek9HZEw0bnZNVVBZa1MxRDJBUFpSNFh1VlFaOFgzN1dz?oc=5" target="_blank">Defense against adversarial attacks: robust and efficient compressed optimized neural networks</a> <font color="#6f6f6f">Nature</font>
- Securing AI from adversarial attacks in the current landscape - Infosys
<a href="https://news.google.com/rss/articles/CBMigwFBVV95cUxONDlNQVgzVlpxYmtwLUNvMG8zN1I0RFhOQVR3eUNMSmpLeUVzQWJzMDN6N3pDNXBsYWJTLWEwUU1VMUhWaVYxUXdyV2EzTGZCakhJRjI0SnFEZXdDaWpRZWFib2EzN1FQNGhSRE8xWDhYbWZFWlQxVFBXT0RTWHd0NTRUTQ?oc=5" target="_blank">Securing AI from adversarial attacks in the current landscape</a> <font color="#6f6f6f">Infosys</font>
- NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems - National Institute of Standards and Technology (.gov)
<a href="https://news.google.com/rss/articles/CBMisAFBVV95cUxPRzhVQUVvZFhaLXZINTVWbU12cW1EVWFfempHdjZhQWxjSGdzNFgwLXE2ZXpDb1UtTHI4YnV2UXVLUXJZa2NSck90UnJuLUtia01NVnVfRjBpODQ5NVZYZkgxZVJ5OUdhQ2MtRU9kVjhxd1hreEhicmE0X29XcHczek9xOFVsWWhmYVp1amxHM0VPVkV0cERMdW1FeEJDRGMwRlRWOEx0RURtS2x1XzBsLQ?oc=5" target="_blank">NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems</a> <font color="#6f6f6f">National Institute of Standards and Technology (.gov)</font>
- How to harden machine learning models against adversarial attacks - ReversingLabs
<a href="https://news.google.com/rss/articles/CBMijwFBVV95cUxNbzhSbm40VGIwd042bHV1R09LOHo3ejFweTZxU2dvcUZBNk1XVUlaZXV5T3hiOERUaloxM19TeEZsbWcxTTRPbjBQT1JDOC1ldGpWTFZZM2V3R0FJSUhBaGFGdi1oNldoSmh6bzdMb1h3RGtqWmRUS3Z1SnExVThzdVdKYU9mbEt6VnVRTElpaw?oc=5" target="_blank">How to harden machine learning models against adversarial attacks</a> <font color="#6f6f6f">ReversingLabs</font>
- Adversarial attacks and adversarial robustness in computational pathology - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1McElzRU9JSEVTVUNJaUw3YllpTm01MjVQeV94Z3hjT0EyZTdZcVFIZ1lHRnJiUUxaM0JxQVVIazY3LVVUWHRtT0hGRVdVNHNZbU9mMUdzYTdSNHBsTVJz?oc=5" target="_blank">Adversarial attacks and adversarial robustness in computational pathology</a> <font color="#6f6f6f">Nature</font>
- Adversarial attacks on stock prediction models via Twitter - IBM Research
<a href="https://news.google.com/rss/articles/CBMic0FVX3lxTE02akViQlF6TDB2UnNEZmh6NUl2R1N1ZzFTTk9jMC0tZ0swOU9VZ1dRcXQ3d3pFZUR6andNM2tYMk1RdFRnaldEalphQnlWVlA1aUpsVkV5X216cW45cFNnb0pUZDhLdnhHYk5CemJ2cG1UU3c?oc=5" target="_blank">Adversarial attacks on stock prediction models via Twitter</a> <font color="#6f6f6f">IBM Research</font>
- Adversarial Machine Learning Poses a New Threat to National Security - AFCEA International
<a href="https://news.google.com/rss/articles/CBMirgFBVV95cUxNdWpCTVAwWk54MDFqRXZGZjZMc3hvU0V2X0UzSF9xRms4V2lBZnhYU3FKWWpxZkRnN2M2Q2I0bEoxaW5zd1Q0OWRZNXVYTWdQWERSY01qbzl3NmFMWmJaWk83SjVzSTUyVlJ5MFlSNzJ3OGdwUHZHWWdKaHFiTEZ0ekNtd2tiZFE1eUVrcUZxSDhyTWtiMG0xQWpfWnFENGxwNkZpdk5fRWd0aW84SEE?oc=5" target="_blank">Adversarial Machine Learning Poses a New Threat to National Security</a> <font color="#6f6f6f">AFCEA International</font>
- Protecting computer vision from adversarial attacks - University of California, Riverside
<a href="https://news.google.com/rss/articles/CBMikAFBVV95cUxPQnlaZ1NvMXZrTG9YVVZCb29iSFBHbG9hUzNKa0VvbEVQVllBaFR2SXZDbkIzVWNkUzNKV29zbVRHT0laZkdibjZlTWVVV0U0YWl0REwzMzc4S0FOZkd1Sml3QnJBaUhGeUZHNXZ1Mml1VHpVcFZXY01POU9RNWxsYnV6d2Zkb3BvTFlPOGZyNFo?oc=5" target="_blank">Protecting computer vision from adversarial attacks</a> <font color="#6f6f6f">University of California, Riverside</font>
- What is AI adversarial robustness? - IBM Research
<a href="https://news.google.com/rss/articles/CBMihgFBVV95cUxOX1dTaUJBQ0UyLVdlTmNUbXZCNkxaQXEtNFZqZ2NjWEYydFVvYWRzYWdfMWh1cFpQUWZkdldjUGV6YkFoRzZQUUhVYUFZbzZjTUtqMnFxUGNFYTJLWmlwS0UwZWdIUHBMSnQ5MFE4LXFhUkE4dTFpY1N5RUtHbXFLaHVjV29ndw?oc=5" target="_blank">What is AI adversarial robustness?</a> <font color="#6f6f6f">IBM Research</font>
- A turtle—or a rifle? Hackers easily fool AIs into seeing the wrong thing - Science | AAAS
<a href="https://news.google.com/rss/articles/CBMinwFBVV95cUxPd0hiSHNlRlpzaGpEemhwZ1NFMi14UkRjNEdWUGRCSWRzYUt0SGRuSzB2a0NVclFmVWlxVGlaa2VDb29zUGdmNXpYeE5YWFlzTUp6VGtLRUVFVjEwZWRZelFDUG9oaFZ4bnVOYU9NT2cxYmozYnMzQmZXaUp3NnFJdkZHM0J0VHd3M1cxRnhySU5JaTJQUERPN2RQWWU5ZlU?oc=5" target="_blank">A turtle—or a rifle? Hackers easily fool AIs into seeing the wrong thing</a> <font color="#6f6f6f">Science | AAAS</font>
- Using adversarial attacks to refine molecular energy predictions - MIT News
<a href="https://news.google.com/rss/articles/CBMilwFBVV95cUxPcXU5N0VwRTViQXhoVnlnanhoOGgyb3dWdV9zcjJQQjFhM21ad0wxSTRYT0h3RmwwdktMRWpaN1pVX0VZUG9QOG00WGJwUFRGSEVjZUMyYVUyV1RwN3NfczVDQ29panBLQXk2ZnhidHFpNWkzTFpKR0xzMFBKUE1QODlkZFI2VC1hb1VfN29vUVlWM1dPcnM4?oc=5" target="_blank">Using adversarial attacks to refine molecular energy predictions</a> <font color="#6f6f6f">MIT News</font>
- Differentiable sampling of molecular geometries with uncertainty-based adversarial attacks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1sbWxHSmVSNzRxUVpvSjc4aEdOSjI3RGlGR0pTcm81RmlqNVB2SUJjdjhlSDRhT3UyV193ZWx5UkVKSXA0NXVGRkQ0aUtNZHIwWGZ1OXZYdkhaZGlsS05F?oc=5" target="_blank">Differentiable sampling of molecular geometries with uncertainty-based adversarial attacks</a> <font color="#6f6f6f">Nature</font>
- How Adversarial Attacks Could Destabilize Military AI Systems - Center for a New American Security | CNAS
<a href="https://news.google.com/rss/articles/CBMiqgFBVV95cUxNX0NzaHJPOV9CWU0tZ0Y0NWhUODN0eEMyWTQ2UUlnZlRxVnBZR1Rkd3hoQk8xZU9OTlZQWnlHQXZoSmtsZFM0MUptdHRuVUFVVWREVEtBVzhhVC1xUjl4YjluaEdUOC0tRmppcndsU19ZVXNVbGtTQmdUOWp0QmhDb1Q1cXBtMkpJRDY1V3VSU2ZhTG5oR3ZRWndnTDVyaGlzTml2Q0Q4ZmlOZw?oc=5" target="_blank">How Adversarial Attacks Could Destabilize Military AI Systems</a> <font color="#6f6f6f">Center for a New American Security | CNAS</font>
- How Adversarial Attacks Could Destabilize Military AI Systems - IEEE Spectrum
<a href="https://news.google.com/rss/articles/CBMibEFVX3lxTE1Sd2JnWWxpTGxib1lKUGxiaGVnaUN4VTZtcENXaGlud2hBUlZSbUdrWmhvelFSYllTZ1ctaEMyeWZfLUwwMTN4bGk0a1pfUHpIeExkMGpwOFR3UkpMNGZkbnA5czNKRlphTlJYSNIBgAFBVV95cUxNUzM3MWt0akN0T1BUY3F4YWMwVjZ6bUpJUi1SUnlyUHRBQXdnazRFVXlUN19qY3o2ZmpES0RDUEw0S2xyMUZZcmt3Z0pxMmNvbG5MZ3drTENQRmkxSloxR0p3UXRqN1A4dDhzWnFPel8yNkREUUlVRzJPZVRKMTd5ag?oc=5" target="_blank">How Adversarial Attacks Could Destabilize Military AI Systems</a> <font color="#6f6f6f">IEEE Spectrum</font>