- Adversarial Examples in Computer Vision Guide - Blockchain Council
<a href="https://news.google.com/rss/articles/CBMijwFBVV95cUxOX2xzNlNvVS1ZbUxScWl2NGNuZ3hQcEZ6TF84cnlTemU1VTZDNkRMQjdFNkRrejU3WXoyNXJxRWtmS3h6T2hLY3BkLWlFcWRiSmMxWUFORjB2aUUyZ2wyZGJfRy1QbnpsRDlQMzQ3STdKVE5hTmhjRG15YS01UXMxazRiNHFCTGZmdzNMMEdBVQ?oc=5" target="_blank">Adversarial Examples in Computer Vision Guide</a> <font color="#6f6f6f">Blockchain Council</font>
- Attentional semantic attack for enhancing adversarial samples transferability - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE5sdnZwcnR4SmNqRGFJRTlQUDFlNkVuT3daa29UYUhmTlNTNjVoWS1BbUZmSmRwOG85bGRMOHh1SnRVWDJ5NUNSUE5oZWxaNzJUMUFvZ2wzSDVNWjJDSlVr?oc=5" target="_blank">Attentional semantic attack for enhancing adversarial samples transferability</a> <font color="#6f6f6f">Nature</font>
- Bryan Tuck’s Research Tackles Hidden Vulnerabilities in Artificial Intelligence - University of Houston
<a href="https://news.google.com/rss/articles/CBMigAFBVV95cUxQbDl2Sk9uNUVQMV9tcVhlOV9hQ04yYmZJd0tkcUk0WEd1TmJaWVZTN0N0dHpvTDJMVzk1OGJfOWNXMXM4X19JM1BBdzNrZm9uWFNuOG5wS1lObHc0QnVoeXpSY0REb1dRcVhaN25HbC15elZNcll2VWlwT3ZDNVFmZg?oc=5" target="_blank">Bryan Tuck’s Research Tackles Hidden Vulnerabilities in Artificial Intelligence</a> <font color="#6f6f6f">University of Houston</font>
- Adversarial AI reveals mechanisms and treatments for disorders of consciousness - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9CLXRYbkZDWFQ0eGppdEFSQl9DQldfcWJ1QzM2czNhbnNZOWN0N2t3QVVuclcxZTRxMDdWWXJYbk5FWS15QXVTcWowYS1CaWNwOEVCWGpvTEFpRDJGMnpz?oc=5" target="_blank">Adversarial AI reveals mechanisms and treatments for disorders of consciousness</a> <font color="#6f6f6f">Nature</font>
- Self-purification: Enhancing adversarial defense by leveraging local relative robustness - ScienceDirect.com
<a href="https://news.google.com/rss/articles/CBMie0FVX3lxTFBGczkxdjZYdGhrSW1iWFd6RklJWVV2dkZ2T3BaUTB2N2NNdDlHV0JHU3FNUHVmZkUtR3M5Q1Z3emNXeF9XbFZwOEJ4c3BWV09kdVFHRldMWXMyWXE5bFhVemlEUEJ1dlZKN0Uzdi1ZMTBlcWN1T3BnTmVVdw?oc=5" target="_blank">Self-purification: Enhancing adversarial defense by leveraging local relative robustness</a> <font color="#6f6f6f">ScienceDirect.com</font>
- NIST Finalizes Cyber Attack Guidance for Adversarial Machine Learning - Hunton Andrews Kurth LLP
<a href="https://news.google.com/rss/articles/CBMixgFBVV95cUxQNkNFcjBiS2lIMHJYcVR0NDZaR1BvVDdmQ1NFQVMydmNCd09QVVNBQ09ZN3oteTRHS3B3Mm9yamxFOG1NTlpoeVBWSFkta3NxRTFrTG5PNV9DSF81ZG9Uc3hyb3Q5NVJDa2JuRU4xSjJsdDcxQjJfMlJqd0dxN0toZ0lUMlhmR3p0eU9sODhPTWJVZWs4VjVqTG9sVi1mTWxlNTBXX3VxczdzWHNJMHgxWF9QUjhCelVnYldpdG16U2d0M19Ecnc?oc=5" target="_blank">NIST Finalizes Cyber Attack Guidance for Adversarial Machine Learning</a> <font color="#6f6f6f">Hunton Andrews Kurth LLP</font>
- Query-efficient decision-based adversarial attack with low query budget - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1vek9RT044QkVKejZJSjBvZkJkSGFKMlhVTW82V05FQjFHTENsZGkxQkRlV2FFWnVNd1EwSkY5VHdBSXB6V2xrM1VkV1kzTFNQVEc0OGw2dXg0OU11ci1R?oc=5" target="_blank">Query-efficient decision-based adversarial attack with low query budget</a> <font color="#6f6f6f">Nature</font>
- Adversarial robust EEG-based brain–computer interfaces using a hierarchical convolutional neural network - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE56UTZtY1Q2anZQb0k1R3BKZUNTUzU0aE96RUF1NEh2RkpzUzl5YW1uZlBRWVQ3VFBZbEtDYTl0eU8xRmswdDh5WXhxSDYtTEhVM2V0cG51Z3JRckE1U0h3?oc=5" target="_blank">Adversarial robust EEG-based brain–computer interfaces using a hierarchical convolutional neural network</a> <font color="#6f6f6f">Nature</font>
- Evaluating gait system vulnerabilities through PPO and GAN-generated adversarial attacks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE0xWDhVbEZDendqMzNad1lFQW92Rnk4ZmFLNTMzdW9pQ2dWLXJ6N2MxeE1uWGNMRFh2STZSSl9iQUtZMTI4bFl6ZHdaN3dWdm5mNFV4ai1kYkRqbWZ3dEVJ?oc=5" target="_blank">Evaluating gait system vulnerabilities through PPO and GAN-generated adversarial attacks</a> <font color="#6f6f6f">Nature</font>
- Stress-testing AI vision systems: Rethinking how adversarial images are generated - Tech Xplore
<a href="https://news.google.com/rss/articles/CBMihwFBVV95cUxOeEFHbHNrNTJ5SEJLcFJDMlBaanE3UHdWY1BicjVQX05qQk1uem1BeUN1akdQRXNDejhPMC1jSU1jWkk5Y0llU0JsTExrcFBLS1lfemdlVERmRzBPODBiTkE2Y1NKOC13TmZVLTJzT0lnNE1FUlpFYmRrX1FpcjJGU3ZVaFIwR1E?oc=5" target="_blank">Stress-testing AI vision systems: Rethinking how adversarial images are generated</a> <font color="#6f6f6f">Tech Xplore</font>
- Adversarial robustness guarantees for quantum classifiers - npj Quantum Information - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1TWFoxTlJ6VHVwQy11SVZLR3ZUVTJDQkV0TTkzNjhfOC1GeUZockZUYmlxdTRWNnpEYWlYQmh4cWZ2OEhWZmZ4VFdnbXozWGo4UU84UTZJOHVMc3NaVElJ?oc=5" target="_blank">Adversarial robustness guarantees for quantum classifiers - npj Quantum Information</a> <font color="#6f6f6f">Nature</font>
- Optimized CatBoost machine learning (OCML) for DDoS detection in cloud virtual machines with time-series and adversarial robustness - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBCUTdfQkctSjhJaVlLOU5lYkNnd3dLM3VhRExZOExsZWo1X1F4WXdiRE1yOVM4RHlrSUZEZjcxMUNIdl9SeWxmSVJWdHhfXzY3aHhMdmo4X1dEbU1Mbk84?oc=5" target="_blank">Optimized CatBoost machine learning (OCML) for DDoS detection in cloud virtual machines with time-series and adversarial robustness</a> <font color="#6f6f6f">Nature</font>
- Adversarial AI preparedness in defense and national security - Guidehouse
<a href="https://news.google.com/rss/articles/CBMijgFBVV95cUxQcFVYNDBOMVZIRUV4U0xoVWNuQ28zOXcyVG8xLWNpaGNRLW5kUlI2TWFOZGZ3NnhXekxTcWNmdjRROW8zcGMtMkFVV2JiNjdFOVFrQVpnYi1XcDMwLWNlT1I0V1RkWjhZZG52VGQ4Ul92YWF1aDVwV1lKUUdQSWdFNVZ0ZF9QRUdmeHVBMndR?oc=5" target="_blank">Adversarial AI preparedness in defense and national security</a> <font color="#6f6f6f">Guidehouse</font>
- Dual-targeted adversarial noise for 3D point cloud classification model - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE5zQ1J0YjQyOTlmelpfZkJ0cUhqVVlIM3NVcjktYWtZaE92eGpzWG5kWUswYXNVVjZCaU9kaDNSVEZHcy10RERoamcteUVycHBBUHlQYWpmdTduWGVSWWJB?oc=5" target="_blank">Dual-targeted adversarial noise for 3D point cloud classification model</a> <font color="#6f6f6f">Nature</font>
- Adversarial learning breakthrough enables real-time AI security - AI News
<a href="https://news.google.com/rss/articles/CBMiqAFBVV95cUxOLVlFZlBDdkVZVVBWWHNrM2VKVTVMS1VvS0pyTm9vMjUweVhiWHRIcWZzTzZFNWVabXlNVFNJWU9LYV9kUGJWak1IazFVaWVwRlRUT3VMTk40MjlqN2FzVm9Ea0xSNDhXN0dhTWdmQkZqRk00UGlPRFdGLXhCUlRHVlBIeGkyVUUtR2xqOXN4anllLWVoNk56bjNqRHo3VDlucG1GWWZfTU8?oc=5" target="_blank">Adversarial learning breakthrough enables real-time AI security</a> <font color="#6f6f6f">AI News</font>
- Few-shot cross-domain fault diagnosis via adversarial meta-learning - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBHYzQ3WHRIb1pGUkNJeFpMWlFSd0E4MG5fWENYT3MwWHJ4SmNvMkJLUGFjeW5iU0F0S2NvYkpXSE16dlFpYWZUVzNlR1FSMnZ1djh4blg1cHI4MmNOUjdj?oc=5" target="_blank">Few-shot cross-domain fault diagnosis via adversarial meta-learning</a> <font color="#6f6f6f">Nature</font>
- Segments-aware universal adversarial perturbations purification on 3D point cloud classifiers - Frontiers
<a href="https://news.google.com/rss/articles/CBMimgFBVV95cUxQSTh1UGhid2tZRDNpRDZEbFhnam5IeXRObTk2eEM1WVU2WV9rTHJ0N2hiNElja1dhNUo2NC01TllTaXhpUmpPa0FVdFBXaDUyRzFmaVU4WGFVVVd5MUZ2YjNma3pqQlBjQ1ZQdkxfY201eDMyenFwY29ESHA0bldmeE1tN2hCb3VndHltZ3JZWXMycS1NTnJ1WXFB?oc=5" target="_blank">Segments-aware universal adversarial perturbations purification on 3D point cloud classifiers</a> <font color="#6f6f6f">Frontiers</font>
- Ranking-enhanced anomaly detection using Active Learning-assisted Attention Adversarial Dual AutoEncoder - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9DNVpsUnJxQi01LVFITUlieDcwdGg4NFFpVGticnRwc0tEaFgzamJyNjhvWkxEVWMxdjNNLWF4YVVWSndEVVRHZzNJZUJSS0dKZm0xQXN2Q3JFdDg4UTIw?oc=5" target="_blank">Ranking-enhanced anomaly detection using Active Learning-assisted Attention Adversarial Dual AutoEncoder</a> <font color="#6f6f6f">Nature</font>
- Hybrid framework for image forgery detection and robustness against adversarial attacks using vision transformer and SVM - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1KclB0ZWN4S1hqdU5VX1YyUnlpSGlETmtpaGRaLVg4aTFEUm5CWmVNcS1EQ2lnejRpRDFGeHpOcVpsQ3RROHIzYmtCMEpKRnp3VGRnWGhOU0ZOcEtnZ0s0?oc=5" target="_blank">Hybrid framework for image forgery detection and robustness against adversarial attacks using vision transformer and SVM</a> <font color="#6f6f6f">Nature</font>
- Voice Deepfakes and Adversarial Attacks Detection - Biometric Update
<a href="https://news.google.com/rss/articles/CBMikgFBVV95cUxNcWREdEF6T2tQTS1OclJwT0hGSUdTMGVBdFdUSkxfWEx6Nk9Vbi0yN3NVMks4NEs1ZDc0c2FjaGFaSUw3SW1felRXZnRKNmd6TW42Nk05VGtMVE11UWROSWYtNEFkVE9Hb0VNVXc1VDFnbm44TVRLX0VrMUd0NTVKdlBWTWhVMjNkNTZXQ0JPWERrQQ?oc=5" target="_blank">Voice Deepfakes and Adversarial Attacks Detection</a> <font color="#6f6f6f">Biometric Update</font>
- Generative Adversarial Networks for High-Dimensional Item Factor Analysis: A Deep Adversarial Learning Algorithm - Cambridge University Press & Assessment
<a href="https://news.google.com/rss/articles/CBMirgJBVV95cUxPMWd2aHNFaXNoNGVzaGZrWWwyaF9QWGJQQjVkZFptMVA5WVpIcEdjVzhfVnhGWUVqM3lWVEZuV1pENlVzQVJpZ1l5blZ1bjBoY0RJUWw0VENxX1JBNTgxbmIwSUc5UU5EWGVGeUxkMlFXZXIwam5HU3o0Q25fYzNCOG4tWFZmbFFiVzNxNzBPTlZwc2I1b0FxYTVsV2toRnJkb1JTOHJwYllDYXl4bjU3VWVsUEhjTHRsaXJrZk5rM0o5OVBlZElRalhQNHN3X2JWSGFVQzBuaTVtTU90NEJmN2FJWWNWU2o5S0JTMEM4aXJyd05ERmNoMHg1Q0RuSGppNHdLSmRscDViTzNXakRoTXVrRE9DVkxkRDNKN3RJVGpYWHNNSWdRemItaUd1QQ?oc=5" target="_blank">Generative Adversarial Networks for High-Dimensional Item Factor Analysis: A Deep Adversarial Learning Algorithm</a> <font color="#6f6f6f">Cambridge University Press & Assessment</font>
- XAI-enhanced Quantum Adversarial Networks Achieve 0.27 RMSE for Galaxy Velocity Dispersion Modeling - Quantum Zeitgeist
<a href="https://news.google.com/rss/articles/CBMisgFBVV95cUxOU3B2MWFsUm10MFJpYUFEb3I4WVZZN2lZdmNMazVESnB0X0VfVmNsckpfNktIWkpQZ3lMWFctcjlmVGRjTDZ5YVo3UGZEZkFvaEhHX2JPSHU1VGtwUEtQT2JmNGwxTUxsMkpSaWwtalMyRkpiWFI5OE1WT2hzOG5jREh1OWo0MDQ1OFAyMjZKUE9pVnZBc2xEdnlRa20tLVhmLU9mMHk3UE9GV1JHTmMtcDRR?oc=5" target="_blank">XAI-enhanced Quantum Adversarial Networks Achieve 0.27 RMSE for Galaxy Velocity Dispersion Modeling</a> <font color="#6f6f6f">Quantum Zeitgeist</font>
- How can you protect against adversarial prompting in generative AI? - eeworldonline.com
<a href="https://news.google.com/rss/articles/CBMingFBVV95cUxOUUhfVW5KNkJUS2tzNkN6MkVkaldtWFJJM0dWTWFZWmNuVE5uX1FNaGg0VTF0VXQ3dWV4NmNHNlhXYzVNSHhEOGI3VFBueExqMF9qUDBzbnJaRERlb0xTOWtzR3dFRWlpLTRpVzNTRWRiRjJLZ1VlLXU0UzM4VVl0YWtGRGktRlBEUjNNLWJOSFV2ckphUFpoa2JBZGdwdw?oc=5" target="_blank">How can you protect against adversarial prompting in generative AI?</a> <font color="#6f6f6f">eeworldonline.com</font>
- An incremental adversarial training method enables timeliness and rapid new knowledge acquisition - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE5vX25meTVQRUxLd2pjWWV5V1FKOW5KbXMtc3pfQkZBaHVlNHp1OGVQRllZS05KREdiOWtlTGZTRjlNdy1xdnBSMjl2dG4wbENZNkhPYV9uT3A4eWo5TFRF?oc=5" target="_blank">An incremental adversarial training method enables timeliness and rapid new knowledge acquisition</a> <font color="#6f6f6f">Nature</font>
- Adaptive consensus optimization in blockchain using reinforcement learning and validation in adversarial environments - Frontiers
<a href="https://news.google.com/rss/articles/CBMiogFBVV95cUxQZTVrSEQ3ZEE4ZkEzOFI2anAyakZtVHZZRGpvVUJNQzZGVWhNd1Y5bVVVV2RaWmxrWG1wcnQ4QXRSUndZQXBycWxlSWh3dzNwb2RaelFEZ29SOWVXd3VHMjhnN2d2aHFfVnBvbVhmLU03ckZ5ZmE3OVZsSXIyUEs5bTBmMERFMTFPRnJKNm1ZVGNnV1J4Qjc5SnhwdXJEWlQyanc?oc=5" target="_blank">Adaptive consensus optimization in blockchain using reinforcement learning and validation in adversarial environments</a> <font color="#6f6f6f">Frontiers</font>
- Adversarial Learning Techniques Test Image Detection Systems | The News Wire | Summer 2019 - Photonics Spectra
<a href="https://news.google.com/rss/articles/CBMijAFBVV95cUxQZW1Mb25DTjV5amhPT1lwb05TRGtLZlg0UGF1VzlqZ3VBMUZfVGVrc3JGQUNKbjlIUzhjQUFpSmh5VEhLTUVGNkE5N1NGRHAzOXoyRGNWS2NTVkJDTkM1b0E5OXUyOWhpVDVfbGtnRGtwWDdXbjU5N3BSVEZFREM1c0tSR3RUNEVYcF9fVA?oc=5" target="_blank">Adversarial Learning Techniques Test Image Detection Systems | The News Wire | Summer 2019</a> <font color="#6f6f6f">Photonics Spectra</font>
- Adversarial natural language processing: overview, challenges, and policy implications - Cambridge University Press & Assessment
<a href="https://news.google.com/rss/articles/CBMijAJBVV95cUxQSGNaQVNfRFY1dEFkM1JET09NVFRfaUdlc1pSdGFHdDlUNElzTFJsMVdRYjc1Ympab3RPUXc5RXZzWkY4cXU1akpwR1I5TlBYZEdyb2lZOVhGUlhhcmxQYi1PbGU4VU0yZ2JKN0h3NlZkUUtEajU3UUdqeG9GeFlYS19nb0FTUnNTdk9qMDdmdU1WTHgzMWhMSDhUcTh3WnlyMFp3enZtWDVYYnNRZlh2VEl4dXg5S25mcWZEbnkyamZQM3B1WDBmN2xCREtIdmxDWWJyTDllUm9yOHZjZnNmRzJqeWxOcWZ6cElMTGFEbHpjYlY4b3NDRFhoRTZGT1VpNkFoLXdkdUZhbDc1?oc=5" target="_blank">Adversarial natural language processing: overview, challenges, and policy implications</a> <font color="#6f6f6f">Cambridge University Press & Assessment</font>
- Enhancing Fair Adversarial Training through Identification and Augmentation of Challenging Examples - Bioengineer.org
<a href="https://news.google.com/rss/articles/CBMiwgFBVV95cUxQMEF5Z21QUm5kNFNkTTVVWmZPUjJJblY0UkVRUW1iVWhEOUktTDh6VmV1NVJuVThEa2ZCQW1YVGFjR0JqaVZRNTlqWS0tMFd3TnBKM3l0YVJ5QWc5bzdZR2ExeEdHWXRyaDRMRUNJUVMwcDFJMVczZTczdjN1d2tRUFRSMzYxNlNxb0VJYXEzdGJGT0tycGs4V3EtX0p6cWh0RU9YM1ZmbnlWNXRjQmZBbEdOVFAyWGVlbGlsRTRTU054Zw?oc=5" target="_blank">Enhancing Fair Adversarial Training through Identification and Augmentation of Challenging Examples</a> <font color="#6f6f6f">Bioengineer.org</font>
- Diversity-enhanced reconstruction as plug-in defenders against adversarial perturbations - Frontiers
<a href="https://news.google.com/rss/articles/CBMiogFBVV95cUxOZkFnWXQzd2VxX2E1ZmFrNkFiNWRaY0ZLcDhSYTNXRG1yck9tRW44eG5MTG1kR2lnNHlsRFM0NExUc3FjbUttVzN1eFE2WDZvVFdKcEhReFhvbnpudDllWEdvekVSakJMYnpNdWpKWndBTmRIOGFEcGFya3FtZDF0T2R3blpqMDNvVEY4eFQyelVPSE54WVB6LWRSTFJqNWJDbHc?oc=5" target="_blank">Diversity-enhanced reconstruction as plug-in defenders against adversarial perturbations</a> <font color="#6f6f6f">Frontiers</font>
- U.S. Army Cyber Science Looks to the Edges of Machine Learning - Defense Security Monitor
<a href="https://news.google.com/rss/articles/CBMisAFBVV95cUxObncyTWNNZXVTZURLd2MxcE1kUWtSc21sN2c2OEpvUmd2VGRKTm9Oa19aWTJ6eWxvUHpja21wWTdKd3dVRjBrb1Q2d2RDY1NObFJScXJFQWhyYTBQXy1scDUxZTZJaFZxSzZsTTZwdk1IeVl5Q1hBVFZEWHlDYm93ZWlURDBlbHRyM3JQRmhKQkFyTkhQSXpxS2Y1TW9NZnk3eDBpWEc1N2s0MVp3UjNFNw?oc=5" target="_blank">U.S. Army Cyber Science Looks to the Edges of Machine Learning</a> <font color="#6f6f6f">Defense Security Monitor</font>
- Identifying significant features in adversarial attack detection framework using federated learning empowered medical IoT network security - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1mR3phTmZGcjA2NXFIVG9pRjh4NHFYUjRBSGhQTjBGSW5TVVEzbF9RdWxEMEg2MHZxdW1sRm5MeGxDci1ZQk0xNmVBcGN3b3lmSjBIUUhJWnYtZ3hfLTNj?oc=5" target="_blank">Identifying significant features in adversarial attack detection framework using federated learning empowered medical IoT network security</a> <font color="#6f6f6f">Nature</font>
- Review: Adversarial AI Attacks, Mitigations, and Defense Strategies - Help Net Security
<a href="https://news.google.com/rss/articles/CBMirAFBVV95cUxOSnVoR0Jxc01MamgxNmNfY0R0MThIeXFmeDl0YkZXNVdYRS1wOGpZU2QwckdwZWphMlZMVTR1M3V5WnpUeElleGdNQmZVaU50NmFVN2hOQUZhSC0wUFNZdzZwemlSeVV6U1dXOU5RRG9wblBnSDk5eWV1UjFaYkpTaHFBUGN6Sklnc09Tb0d4c1VweXg0eXhaZTRJTjRsQ28wZVFzYUpBU3ZGUnJq?oc=5" target="_blank">Review: Adversarial AI Attacks, Mitigations, and Defense Strategies</a> <font color="#6f6f6f">Help Net Security</font>
- A comprehensive survey of deep face verification systems adversarial attacks and defense strategies - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1DVDhhX1BzblJOX0JFUDRTSDBrdE53Z3l0dzhvdVhrcGVSS1Fpb05pbWNsV3ZjOVVnUTRha1JqUlhBcWVuYVVUcTBQMXp0bGpsYWpjYlp4dFg0NVZfTTRJ?oc=5" target="_blank">A comprehensive survey of deep face verification systems adversarial attacks and defense strategies</a> <font color="#6f6f6f">Nature</font>
- MeetSafe: enhancing robustness against white-box adversarial examples - Frontiers
<a href="https://news.google.com/rss/articles/CBMimgFBVV95cUxNbWcwTXRXX3RvUG9xOEZ2Tm5FalhQN3VzeFgwLWVyc2FZSlc5WVp0WGw2UUxrMEFKX2syZVVxZVFMdWtKbUlycUR6OWd1dzVVbzVmTDl6N2M5QmVPSFgtQVJzWnlXczVKYkJWbmxWX04wMnZhaE1oc0gySFN2SWF1cUY1aHJsWHdxVEJNRmpsU0pwNjFZYnRmY19B?oc=5" target="_blank">MeetSafe: enhancing robustness against white-box adversarial examples</a> <font color="#6f6f6f">Frontiers</font>
- Topological approach detects adversarial attacks in multimodal AI systems - Tech Xplore
<a href="https://news.google.com/rss/articles/CBMikAFBVV95cUxOaXVYNzhiMnRJcmJPQjItME91enRhbkRtYVU1eVBHRWx6ZEhNVnBaRDV4em5vVTVfcU1LNHVSZHFyU2Q0VXAzcmJqeV9hSjNEd0ZURlplUVl5TVBYeUREQ0NGOVpuNms0RVp0Zy1ubFVMZWFsSWttS1ZVbGV3ZWpORmRuZ0dHZ0p6TDVaRzZoVG8?oc=5" target="_blank">Topological approach detects adversarial attacks in multimodal AI systems</a> <font color="#6f6f6f">Tech Xplore</font>
- Gradual poisoning of a chest x-ray convolutional neural network with an adversarial attack and AI explainability methods - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBMNkNneUVKTzBHc0lmZ21zVkt4dXZ1SnpHN21nMmI4VmVIalcwekszUG9iNm4xWFhQWEpYakJZaXI5QzJEd0hZX2N4Zlk1UlVpUjY3eXNQeDZ0bWlackRr?oc=5" target="_blank">Gradual poisoning of a chest x-ray convolutional neural network with an adversarial attack and AI explainability methods</a> <font color="#6f6f6f">Nature</font>
- Learning atomic forces from uncertainty-calibrated adversarial attacks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9vcUZHVkpVSFJEdHhack5hbUd6WjJabDY0YWFTSDRsSmMxajNEdnVWS1AzU0lwVjRYb2pibzg2M0NzMENuWkFnTTJMdlBwR2hlMzVpUW9aOEphQmFOTmF3?oc=5" target="_blank">Learning atomic forces from uncertainty-calibrated adversarial attacks</a> <font color="#6f6f6f">Nature</font>
- Machine learning based on a generative adversarial tri-model - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBKMmRlRXQ1LXhkT3diRWttNVY1WjVDSzh6SGgwY2YyMS12TkZNX0ItOVZSVGhjZF96Q2JPT19GMmRMSV8wcXVObmRPanZXcmhoeVFiSTVMUHRQMF8xWlIw?oc=5" target="_blank">Machine learning based on a generative adversarial tri-model</a> <font color="#6f6f6f">Nature</font>
- Travel AI is fragile—can adversarial training fix that? - PhocusWire
<a href="https://news.google.com/rss/articles/CBMic0FVX3lxTFBlNlgwd3cyUjdhdVFyRGFaNWFIelFpanZXOXJVWTlQYzcxN2Q3WVRiTDc5YTNMY2dLMmMzR0phSGZfajd4ZXdyMk9tTWYzcmkzNkd4TmsySEg3em43WjJHMGVMak1ReU1YR1NYM1dLeFI4Ym8?oc=5" target="_blank">Travel AI is fragile—can adversarial training fix that?</a> <font color="#6f6f6f">PhocusWire</font>
- Mobile applications for skin cancer detection are vulnerable to physical camera-based adversarial attacks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE85OXNJLUlOenJoYV9TNHZsSmZBNDJNQ09ZdFhDbDVRRS1VVWI0bENKSGQ5dVdLaHBNVEhYRW5WVkhZaWtLRGJxWnh1S2VEcDBPNlEzenFod3dreGZucTVB?oc=5" target="_blank">Mobile applications for skin cancer detection are vulnerable to physical camera-based adversarial attacks</a> <font color="#6f6f6f">Nature</font>
- NIST releases new AI attack taxonomy with expanded GenAI section - SC Media
<a href="https://news.google.com/rss/articles/CBMimAFBVV95cUxNNXVldkgyX3FVbkJraDM0Y1R4MEhpcThKazU3Mi1DYUNNa1pYLV9pLWFZWEpNdnFsdXI5eFhKYVFNT2d6dTBKTFlsMk5JbGxzTUF0S0d0MVZnY3RNam1CcGY2VWgtcm9ZbFBhZVpXNUtjZ1k0Zl9KR2tMcm5rdnpENkRhOGFRaXBHT0xrVUhSS1lvOU1FeFRwbA?oc=5" target="_blank">NIST releases new AI attack taxonomy with expanded GenAI section</a> <font color="#6f6f6f">SC Media</font>
- Efficient black-box attack with surrogate models and multiple universal adversarial perturbations - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1iUVdQQVljZEhhSXBEWkNuTDBDUDVGb2c0am5NbUxlZVhtRWlfT3VsYTExMjZBVENMQVRka1BjOFR2cW1TRFNKcXZWMjZpRGx2X3BXODhCWjY4Z0JjeXRZ?oc=5" target="_blank">Efficient black-box attack with surrogate models and multiple universal adversarial perturbations</a> <font color="#6f6f6f">Nature</font>
- A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE5wZ0ZLdllyano5dnp4dG90d04xUWE2UmNGaU8wRU9KWU5JTXVIWnc0OE1uRjZRN3hWcENqbGtKQlN3T1hpaUZHZno2WTQ4WHJfVlRSbmxQbjVEa0dfWG5z?oc=5" target="_blank">A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing</a> <font color="#6f6f6f">Nature</font>
- Generative adversarial networks for creating realistic training data for machine learning-based segmentation of FIB tomography data - IOPscience
<a href="https://news.google.com/rss/articles/CBMickFVX3lxTE9fZTVja0lqTkNyVnJqSWhfcDQzem9EX3ZpLTJRbWFrX3p2R1JKeHRieThhSGVoRlpmZk5xZXdib25SWDFoOHc2YUhESGI5dlJVbXRvOFlxWkZ4ZmhLaGJZTmpYaTRRSjFUZ1V6MU83WEFMUQ?oc=5" target="_blank">Generative adversarial networks for creating realistic training data for machine learning-based segmentation of FIB tomography data - IOPscience</a> <font color="#6f6f6f">IOPscience</font>
- An enhanced ensemble defense framework for boosting adversarial robustness of intrusion detection systems - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9fc3lEcmQwQkstSWJjTjdPOG5nWE53cHZXMlhtOFp2amRkd3U1c3I5Nk5pc2U0MmZlMUpBaFhFN1RYQnV3bm1lbHRwWUw0bG4yeFVvUjFoM3VHNlMzLUs4?oc=5" target="_blank">An enhanced ensemble defense framework for boosting adversarial robustness of intrusion detection systems</a> <font color="#6f6f6f">Nature</font>
- NIST's adversarial ML guidance: 6 action items for your security team - ReversingLabs
<a href="https://news.google.com/rss/articles/CBMiowFBVV95cUxQLV9TOERtNmQtQVRJOHFHTldfUzhUMzVRX29ILXBldmlLUmlMUGQ3ZXF0SVVBeWlJOGpxYnpFQ2Jwdi1adUZQM05yOEp4UlZ4VDJoNmdfekpqWlZNQWdTbU1VSmpwWDZfY29GSDNuQ2lBSEFZdUlnWjdfbUNxaTVsV0tPa3ExeXp2Ylh1ZXZCRG1ydWNHZE5UbjFPS2RldVo3Q25Z?oc=5" target="_blank">NIST's adversarial ML guidance: 6 action items for your security team</a> <font color="#6f6f6f">ReversingLabs</font>
- Defending against and generating adversarial examples together with generative adversarial networks - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBVNWl1ZGMxdVJDb2FVcFVoc1psbjNsRkRsN3duN1FVTnB6YXlwdDlnWGlBNmdLT2FUWUZ6ODJMRUZOdXE2bzNUenR0M2V3dnRrTGpMVUNVTnl6RXRCMFc0?oc=5" target="_blank">Defending against and generating adversarial examples together with generative adversarial networks</a> <font color="#6f6f6f">Nature</font>
- GEAAD: generating evasive adversarial attacks against android malware defense - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE93b2k2MTZWazFYcmN4YnpRVW1vNVBtdkh5alEwRGVUcXZjcjQ2WWNYT081bXdlQzVCUEFLVlVKZUVEMFhTWWZBT2pQM090S01Bb3BRc21SVEtXTGpVZ3VV?oc=5" target="_blank">GEAAD: generating evasive adversarial attacks against android malware defense</a> <font color="#6f6f6f">Nature</font>
- Tailoring adversarial attacks on deep neural networks for targeted class manipulation using DeepFool algorithm - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9Bbnl1ek5JbnZyNk91N2pKWWx6QlpKWklHTmZDYnU4eThVcU1kZE82a1U5MzdIT2JSZnlrZjJnamtreW5QOWdzMEh3V1UzLU9LWWxTMU94RW12RVR1R2RV?oc=5" target="_blank">Tailoring adversarial attacks on deep neural networks for targeted class manipulation using DeepFool algorithm</a> <font color="#6f6f6f">Nature</font>
- NIST Publishes Adversarial Machine Learning Guidance - ExecutiveGov
<a href="https://news.google.com/rss/articles/CBMimgFBVV95cUxNTXNoTHQzbGctWU5qUWtKTmVfTjRfRDhpRGlVTGxzajR4YXFIa0p3aGtleDlsa1ZEb1JGOEpDMXpETFdhTm1kT19fU1N0SF9zUEtpNHVIQVhCRDhMb2VwMnVfSU9uYVNxdEV5U0lTV0Jfb2dpWmJ2LVJzVkdqeThVQl90azRpSXJldFBWSEl2Q3lJTWZfMzRUckd3?oc=5" target="_blank">NIST Publishes Adversarial Machine Learning Guidance</a> <font color="#6f6f6f">ExecutiveGov</font>
- NIST Releases Final Report on AI/ML Cybersecurity Threats and Mitigations - Homeland Security Today
<a href="https://news.google.com/rss/articles/CBMizAFBVV95cUxQenBBa2FVcXYzZkM3WEJIRTF4M1UxSkNadFBPUm5aYll1ZTRKYXczOXpUX1VSX1NVQi1wTExJMUkyOUI5QjJsQkdySDlTd2xha1ExS0EyekxjMFF4VktVQ2x3TmZSWHdSb2RFTWpMaHVXeXNjNE1OY1JvM09rNDFlajVKcEZaZ0xQUDJVdkpOWEZFSWJPNzA3bmJLT1FEOW55Qkd0WGp2YUFQM2FCOVpMb3VQMUlCV0lhbmtvQ0ZpY19LWHNodHN2VWpqMjU?oc=5" target="_blank">NIST Releases Final Report on AI/ML Cybersecurity Threats and Mitigations</a> <font color="#6f6f6f">Homeland Security Today</font>
- NIST releases finalized guidelines on protecting AI from attacks - Nextgov/FCW
<a href="https://news.google.com/rss/articles/CBMivAFBVV95cUxOSFFuelZkYTAwS1RvSlpubmFWWTN3ajZGYU53dVlCYTh3OGduUkdRTFhMSTF4Y2RxSktITVFvSkhvdGxWLUR5YlFxWnlnOHJKT1Q3RmlncmJwZTJ1bzItRWJyRnpQRjdacWpqdUlackdUWFpMXzE4emx5dEN1NUxRSHg2bmhwcjdSQzE4aVZ0UXJuWkx3U2o5UlZURk5NbG5QVzBHVUZHWFU1T21KMjRUQV9vbnpQOGFYREFLVg?oc=5" target="_blank">NIST releases finalized guidelines on protecting AI from attacks</a> <font color="#6f6f6f">Nextgov/FCW</font>
- Cisco Co-Authors Update to the NIST Adversarial Machine Learning Taxonomy - Cisco Blogs
<a href="https://news.google.com/rss/articles/CBMipAFBVV95cUxPYkJYcEtVQkNCdWc2a19iZS1ZZXk5Qk5GTFNrd0c1VUdNX1dpa1ZRU09DUHhNd0FRVGtJcWVaamozVmg1aGQwRDFyc0t1Q1VYRTg1ZTNJZ1Q5cUFGcUNZZGlyNDJfMVd1eFowTE9GXy1WZ1c3N1NNbEZycEFLZGVXUzBvZmZSMlhQU3JTVmhwUXFTUVB3SnhyTW1UQU9pLTlSRDdXbA?oc=5" target="_blank">Cisco Co-Authors Update to the NIST Adversarial Machine Learning Taxonomy</a> <font color="#6f6f6f">Cisco Blogs</font>
- Universal attention guided adversarial defense using feature pyramid and non-local mechanisms - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBWSjBSRjlsM3hvS2xWMmdDMkt0bXJPRlJLaEcxV1BNdE9EZC0xNG84bmRvZ2ctZmhGS09NMnFSenFkTzFSd0xZTzMyYnBjS210aVFzMWIyX01LSnRMQllz?oc=5" target="_blank">Universal attention guided adversarial defense using feature pyramid and non-local mechanisms</a> <font color="#6f6f6f">Nature</font>
- 3 Questions: Modeling adversarial intelligence to exploit AI’s security vulnerabilities - MIT News
<a href="https://news.google.com/rss/articles/CBMimAFBVV95cUxOYmpQS0taSWliNDNqVzNIeWxxbmJMRVJzWTROLUFTR1lZWFlldFhNT3NkVGhjbWwzVktDeTVwZHgtZDk1U0Rya3dLSFNFc0N1emZwVWZaRzdSRkNOV1NweFcwc3FZbXZJcUV0MjE5bDcxTllmQjlxYW80TDA4TmNnM2IyZVJUb05BX1EyQkZEVEM0eEk0VGJOUA?oc=5" target="_blank">3 Questions: Modeling adversarial intelligence to exploit AI’s security vulnerabilities</a> <font color="#6f6f6f">MIT News</font>
- Multi-Source Stable Variable Importance Measure via Adversarial Machine Learning - Vanderbilt University Medical Center
<a href="https://news.google.com/rss/articles/CBMisAFBVV95cUxOa3Z5OTlnZktadnVOczdmc2ItQ1ljUmNybW5xcXhWNHAzLWtxZnY2OUhMTGF4ckV0ZjdrdURwanZSR2FkUnFYS0NoNEVjWTdoMF9JRnd4SmVMcjdxSmp5Uk1ocWlSQWFneF9xSkVWT2l1cXItTi1xOEhtMnF0c243dHRDaFZvRTBldjZHT0laSnNMSThWRkxkTXRSRWRpUUZzRnhUUzhGQnUzaDNwSnNsOQ?oc=5" target="_blank">Multi-Source Stable Variable Importance Measure via Adversarial Machine Learning</a> <font color="#6f6f6f">Vanderbilt University Medical Center</font>
- Adversarial Machine Learning in Cybersecurity: Risks and Countermeasures - AiThority
<a href="https://news.google.com/rss/articles/CBMisAFBVV95cUxPMjZzbUZpT1g3cWpDM1hrSHgyaEVLbnRxSXJnOWdUcE4ycFZtenJiZUFfd0JYZlE2eFlUYXFLcTdWV1pnOUVaVmswWHBEeG9TWGlNd2RYQWMwOXJkTDZYYUdCZ0U3QTRHQnEweVA1SHNSWFQ1QWE0RFZmUHVTeGxOTUlQMzR0MUVLZ19WNHU1eFFTM1lDTjNKLXlodUplclFfbGRFenJ6czdiM01IdUhUTQ?oc=5" target="_blank">Adversarial Machine Learning in Cybersecurity: Risks and Countermeasures</a> <font color="#6f6f6f">AiThority</font>
- Adversarial Attacks in Explainable Machine Learning: A Survey of Threats Against Models and Humans - Wiley Interdisciplinary Reviews
<a href="https://news.google.com/rss/articles/CBMickFVX3lxTE5SUmJVY1c5cHVyZFg2eG42emdGZUZCaUd4bjRBbmp5M2Q4YU5WVG9wWGJ3U1JmU05Zdk9NNWpNV2JwcTFVQ0VkVTlycXNGLXNYSnk3R3BQTWFFN0FCbnRMMFEzUVpCLVFXdWltRlhYNTl6UQ?oc=5" target="_blank">Adversarial Attacks in Explainable Machine Learning: A Survey of Threats Against Models and Humans</a> <font color="#6f6f6f">Wiley Interdisciplinary Reviews</font>
- Adversarial robustness in deep neural networks based on variable attributes of the stochastic ensemble model - Frontiers
<a href="https://news.google.com/rss/articles/CBMilgFBVV95cUxNd2J4T2NiQ1QySnEwSDMxYjRHaFpFVlREdW1VXzJBNTdoYkMzTldKR3pUdGtoZi1vR0JmVWxrS1Rmb0tOR3prNU56NnFjaVdCNVQ4VFZ1UkxNN0RoRFJNZk1EU0w1MTMxNVBSYzlUUXZFRDBja241VngydUFVbFVxeUNfZm5oVFp0cnJYVjEtdTJBeHBwdUE?oc=5" target="_blank">Adversarial robustness in deep neural networks based on variable attributes of the stochastic ensemble model</a> <font color="#6f6f6f">Frontiers</font>
- Securing Machine Learning in the Cloud: A Systematic Review of Cloud Machine Learning Security - Frontiers
<a href="https://news.google.com/rss/articles/CBMijgFBVV95cUxONkdpMDhiUW5sbHhzMkNmdjVqbDRCU0FLSkpRUUN5RER2bWZvN01DdEpickpTUkppX1V2SEhEWUdrOTJXNmF4ZVhOcVplcHFCVUZUeVBMQlREVEVJZlRpakRyVGtBQnAyRXd5VG8wWEI5M3NnbnA3Zk5vU2dvN2FNNDNRYlJYeUNETktjN3ZB?oc=5" target="_blank">Securing Machine Learning in the Cloud: A Systematic Review of Cloud Machine Learning Security</a> <font color="#6f6f6f">Frontiers</font>
- Adversarial attacks on neural network policies - OpenAI
<a href="https://news.google.com/rss/articles/CBMifEFVX3lxTE9vMEJ5eDNIZXV5QWFSQmVqaHNham1uaEN6Z3VGZDZ5UkRSQ18za3E3N0U4WnNXaGUtRmVkM0paVVdIMjU3VzkxWWdGVE5LeGlHM3VNOVk4ZVZfUGd4RXliQTlJMklOX1k2U2lqNkFmLTA5R3F6bnRueHhiVlI?oc=5" target="_blank">Adversarial attacks on neural network policies</a> <font color="#6f6f6f">OpenAI</font>
- Emerging AI security risks and considerations: key takeaways from the NIST adversarial machine learning report - Osler, Hoskin & Harcourt LLP
<a href="https://news.google.com/rss/articles/CBMi5wFBVV95cUxOWjlWblVlbVNrdjJ5Z3N5bi1fNWRfdjZUWDJuZ3loQ0NmRzdGMUIxWUtPeVBFRFRnVEE2eFVRYzRYQkN5LWhfM1FFS3NBYVlnSFRQbVVqTWd4VHAtUTgtLTA3anVaaHhUdVJFbUJhd19YUFM4SDdMaVJSTThNNFdCR0NPc2tRbVdURFRudHBoRXExSEFreUpzQlVTb1oxdzJ1N3VLT0dSbWF5N21oMC1qenA2VC1kX19QS045Qklhd0FJNld2N0hsSTNaUkpTWXpvQUFFOU5vSjRxNVpUQWVtYlNqSHFIZDQ?oc=5" target="_blank">Emerging AI security risks and considerations: key takeaways from the NIST adversarial machine learning report</a> <font color="#6f6f6f">Osler, Hoskin & Harcourt LLP</font>
- Safeguarding AI: A Policymaker’s Primer on Adversarial Machine Learning Threats - R Street Institute
<a href="https://news.google.com/rss/articles/CBMiswFBVV95cUxNSGc2QzVOTlRVSjRyd3o3SkI2VHJhSDZ5U3M4Z0dOMUJKWXR3ekhVUHUxSXk3Z3dUTDJJd2tEaERZbTU4N09ZVUdIT0lXMmZIckV4TWh5QW4xZTdKYjZrVDhhN3k3cjNBOTgtZmlxQjR0T0F6RWNua0VvNlZ5RF9fMU1rbGRvR212TTJYWVJBVE1ndEZBTWxSTC10a0NBMUw4bmtyMTIyNlZWd0FLY1R5RjZrcw?oc=5" target="_blank">Safeguarding AI: A Policymaker’s Primer on Adversarial Machine Learning Threats</a> <font color="#6f6f6f">R Street Institute</font>
- Adversarial machine learning: Threats and countermeasures - TechTarget
<a href="https://news.google.com/rss/articles/CBMiqgFBVV95cUxQUUdqS01Zd1NzTkxYTm9QUTBGRUl2TUE1cTRZVEtRLXBIemZOZTVEc0hXUVA4eHdEdVBZYUhLWVhxaDJPRDk5NzNhVjRnMHBVX2RQaElYRG16VFpzbFdNUGpGNTZBMGUzcm1NNVFXaG9yYlNTMVEwX2ZjMU5vaVBDckYySk9qN1pnLU03NnFCSjVibVBja09YSEotQ05oZHp1R1hGb011ZThndw?oc=5" target="_blank">Adversarial machine learning: Threats and countermeasures</a> <font color="#6f6f6f">TechTarget</font>
- The intersection of AI and Cybersecurity - Reply
<a href="https://news.google.com/rss/articles/CBMibkFVX3lxTE5rMENwZEVib2pCcFhJMF9fYzRBZGxYQlpkVGphTUhtb2tYVUZPYTJ0VjdPczM3bEQ4Y3N4YzIxSS1TNzFTOWRDc0NJcVhOQkhMclVGMDdNeU81Z242Q0R1TjR4dHhiWllSeExpWUdn?oc=5" target="_blank">The intersection of AI and Cybersecurity</a> <font color="#6f6f6f">Reply</font>
- Subtle adversarial image manipulations influence both human and machine perception - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1VYWtBSlM5QmFhOU9iQkM5STlaZGsxcmVVY3NwVHVIT3lUeVVzakxSX2tBWVRMYUJHVXNxQnRqNW9hQU9mU2xzZVplal80OHU0SnlVWklrbHpjR1Y3WW8w?oc=5" target="_blank">Subtle adversarial image manipulations influence both human and machine perception</a> <font color="#6f6f6f">Nature</font>
- Towards quantum enhanced adversarial robustness in machine learning - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE9XZ3ZWWXRkZDd0aXV2N0phN1FHcC1BTXMtS283WURPcXdZYnBpSDRETlJ5aFF1N2twUC1fX0o5UUtxNEd3dk1kUDR2LUxMM0hIRGNSbWNFVllqTEFoc0NZ?oc=5" target="_blank">Towards quantum enhanced adversarial robustness in machine learning</a> <font color="#6f6f6f">Nature</font>
- Leveraging the Dark Side: How CrowdStrike Boosts Machine Learning Efficacy Against Adversaries - CrowdStrike
<a href="https://news.google.com/rss/articles/CBMitwFBVV95cUxQczcwak9QZG5rNC03SmVXaklQM3dMNlJhSm1jYTVQU0F2NEh1YjVNWFpySU1uWmNBd1I2RzN0dU93VTRIWElVUlZaODZJYzIwZGdCbkdrQVBRTmExYU9UaUVXSC04bTdjN2dHRjJueFU3bU94dXdob1AwNlkwdHh5dVpaZG0tWXlvaS1OcVY1ZnZOeU1TZW5Tak9CY1JmQmdLUThUNDlXNUZlWGxWOFRIdVhRMVE5b00?oc=5" target="_blank">Leveraging the Dark Side: How CrowdStrike Boosts Machine Learning Efficacy Against Adversaries</a> <font color="#6f6f6f">CrowdStrike</font>
- How to harden machine learning models against adversarial attacks - ReversingLabs
<a href="https://news.google.com/rss/articles/CBMijwFBVV95cUxNbzhSbm40VGIwd042bHV1R09LOHo3ejFweTZxU2dvcUZBNk1XVUlaZXV5T3hiOERUaloxM19TeEZsbWcxTTRPbjBQT1JDOC1ldGpWTFZZM2V3R0FJSUhBaGFGdi1oNldoSmh6bzdMb1h3RGtqWmRUS3Z1SnExVThzdVdKYU9mbEt6VnVRTElpaw?oc=5" target="_blank">How to harden machine learning models against adversarial attacks</a> <font color="#6f6f6f">ReversingLabs</font>
- The challenges of adversarial machine learning in constrained-feature applications - TechTalks
<a href="https://news.google.com/rss/articles/CBMijgFBVV95cUxNS0tjblZBbjB6OG9qcHJuR2V0djh6UjNqa0ltOWdPQjBnWVJpVmVXNWhjV1lGRnNnQ3ZwZlN5VnMyenJpUEx6SUNpVGNjM3dqQ0pFVjBEemg5QmwyMnVaeENRcjREdWNNelo3eHVkX01rbkNCUzB3V3Njdm9nLURtUWtUUUlnT1ZGNkQ0VG9B0gGTAUFVX3lxTE1CX3JkSDdaTGdYR0cxVldnV1BrMWJTRllHczNMUXJwY0RZWm9aVmdocTNyUExyeVl6UmdCRzFiLUN4ckxuSUI2MjM2clJCUkthaWFyNUJ1cUVxUG95VWF2MjFMbGFfRnN6NDdZSGo3SnBrcFpYLWR2MXBld3lZYTBBWGNYSVBtWUo3QXBGYVhTUW9jYw?oc=5" target="_blank">The challenges of adversarial machine learning in constrained-feature applications</a> <font color="#6f6f6f">TechTalks</font>
- Experimental demonstration of adversarial examples in learning topological phases - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFBDQkk3dUFzekozZ2pQTVIxcVBiNHV6UG1JOHRKajRjNDI0bHgwM3ZidVlNVklRTTU4OV9hN1BLanpyeWprNVhpeDhSNUpubFRMQXZEaDJxX3c0MFBhTENz?oc=5" target="_blank">Experimental demonstration of adversarial examples in learning topological phases</a> <font color="#6f6f6f">Nature</font>
- Physics-Aware Machine Learning and Adversarial Attack in Complex-Valued Reconfigurable Diffractive All-Optical Neural Network - Wiley Online Library
<a href="https://news.google.com/rss/articles/CBMicEFVX3lxTE41azAxQXMxLXFGWndoNTdrT3plcENSR0NQdUhOUFJoWXlZX1A0TDNrd2k3MU16eC1kbXJCR1ZCVEp5R3ZMMm9VcGlzRlJzYXpNaUtIajd3V1E5eVgtZGExTlVsUDF0cGJRSnlFQ2Flc2c?oc=5" target="_blank">Physics-Aware Machine Learning and Adversarial Attack in Complex-Valued Reconfigurable Diffractive All-Optical Neural Network</a> <font color="#6f6f6f">Wiley Online Library</font>
- Adversarial Machine Learning Poses a New Threat to National Security - AFCEA International
<a href="https://news.google.com/rss/articles/CBMirgFBVV95cUxNdWpCTVAwWk54MDFqRXZGZjZMc3hvU0V2X0UzSF9xRms4V2lBZnhYU3FKWWpxZkRnN2M2Q2I0bEoxaW5zd1Q0OWRZNXVYTWdQWERSY01qbzl3NmFMWmJaWk83SjVzSTUyVlJ5MFlSNzJ3OGdwUHZHWWdKaHFiTEZ0ekNtd2tiZFE1eUVrcUZxSDhyTWtiMG0xQWpfWnFENGxwNkZpdk5fRWd0aW84SEE?oc=5" target="_blank">Adversarial Machine Learning Poses a New Threat to National Security</a> <font color="#6f6f6f">AFCEA International</font>
- Adversarial machine learning explained: How attackers disrupt AI and ML systems - csoonline.com
<a href="https://news.google.com/rss/articles/CBMiwgFBVV95cUxQWkZOdUJXaEYySVFDcVg3cnhnRWw5bmtYd1VNYVRORG9GcEFxaFZjdzJYQkNrdkF5M1ZaelFjM0pWS1JzcG9ack5yblF3ZFZ3M1dxZVhBWkJSMGlvWkQ5MkdwNlZkdi1ZdXRfNTZ4YlBnb2QxT0J4d2tSU09nSEw1U2ZKM3RhaGdvLUF2XzBydThrSUphVkowalB3V2VWQVhtU1pQMWRSTHc5RWprYTZfTmZ5QXkydWJZeDdXQjczVXI5UQ?oc=5" target="_blank">Adversarial machine learning explained: How attackers disrupt AI and ML systems</a> <font color="#6f6f6f">csoonline.com</font>
- Adversarial Machine Learning: A Beginner’s Guide to Adversarial Attacks and Defenses - HackerNoon
<a href="https://news.google.com/rss/articles/CBMiqAFBVV95cUxOTGZGYWxIU2N1TDJsaWVqX2hpQnRPeDQyWEIyNmd4Nk9xamtRTUZ5eHV6ejFwSTFJcWNZdXlpTTA3UmJ0SjM4eU1jMnBid3BJUXFvUFJTWVg5OF9NR051TllvaEQ3ZzZMSEJxWHo4VE8tZlp6Z0lTdWt3RUlwY3poUW5YUURrUEJiclZXaUdFQmhFT21CaGdsVGpWUEtyUVBxeHBzNEVjelk?oc=5" target="_blank">Adversarial Machine Learning: A Beginner’s Guide to Adversarial Attacks and Defenses</a> <font color="#6f6f6f">HackerNoon</font>
- Reinventing adversarial machine learning: adversarial ML from scratch - Towards Data Science
<a href="https://news.google.com/rss/articles/CBMiswFBVV95cUxQYWZNbmxpa2p0MlMtMHBGYnFXbzBWc0RyRHNKSlc0cmc0ZldGOHVwYjYyNjFHOERmMWM5RjdaWG4zUlg5d0NGRnVBVWxYNnJxbVd0MXN4a2tkMm43Tl9nU2tHcjMyamc4YkctNjJVMk1STmVWZUhod1ZHa0FLZzNrNmpzcTBuazFHb1c3NTFjUHdOTmZFT3JCamdBcXFuWkZ1RmUwVW41djc0TXhzMFVFSTNkTQ?oc=5" target="_blank">Reinventing adversarial machine learning: adversarial ML from scratch</a> <font color="#6f6f6f">Towards Data Science</font>
- What is AI adversarial robustness? - IBM Research
<a href="https://news.google.com/rss/articles/CBMihgFBVV95cUxOX1dTaUJBQ0UyLVdlTmNUbXZCNkxaQXEtNFZqZ2NjWEYydFVvYWRzYWdfMWh1cFpQUWZkdldjUGV6YkFoRzZQUUhVYUFZbzZjTUtqMnFxUGNFYTJLWmlwS0UwZWdIUHBMSnQ5MFE4LXFhUkE4dTFpY1N5RUtHbXFLaHVjV29ndw?oc=5" target="_blank">What is AI adversarial robustness?</a> <font color="#6f6f6f">IBM Research</font>
- A machine and human reader study on AI diagnosis model safety under attacks of adversarial images - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTFB0bGNGQU91MHZIMFNjUGRfd3NKZFVtWGNMQTdkUnNMU1VHaXA3SzVncmZPSzR0R3E3YWE2cm0xSC1weWpHaGZhUy13N0VFSU5oOV9TNkJzVHVXNDV1QWRj?oc=5" target="_blank">A machine and human reader study on AI diagnosis model safety under attacks of adversarial images</a> <font color="#6f6f6f">Nature</font>
- A turtle—or a rifle? Hackers easily fool AIs into seeing the wrong thing - Science | AAAS
<a href="https://news.google.com/rss/articles/CBMinwFBVV95cUxPd0hiSHNlRlpzaGpEemhwZ1NFMi14UkRjNEdWUGRCSWRzYUt0SGRuSzB2a0NVclFmVWlxVGlaa2VDb29zUGdmNXpYeE5YWFlzTUp6VGtLRUVFVjEwZWRZelFDUG9oaFZ4bnVOYU9NT2cxYmozYnMzQmZXaUp3NnFJdkZHM0J0VHd3M1cxRnhySU5JaTJQUERPN2RQWWU5ZlU?oc=5" target="_blank">A turtle—or a rifle? Hackers easily fool AIs into seeing the wrong thing</a> <font color="#6f6f6f">Science | AAAS</font>
- Adversarial interference and its mitigations in privacy-preserving collaborative machine learning - Nature
<a href="https://news.google.com/rss/articles/CBMiX0FVX3lxTE1hOHBZZjdySzVpUFI0OWpIeUlfSDlnZDZNemtUWmdnNGwyRTI1bXZuazcxbVRrczNVNGdBUjFqbUtILUJRUkdfUnZINWZTQS0zb3B6cTZpUXMzREdGUmg4?oc=5" target="_blank">Adversarial interference and its mitigations in privacy-preserving collaborative machine learning</a> <font color="#6f6f6f">Nature</font>
- Adversarial attacks in machine learning: What they are and how to stop them - Venturebeat
<a href="https://news.google.com/rss/articles/CBMirgFBVV95cUxNNEM5cXdYU19hX04tbW12emhyMHBCRFFJUzVRTjI5SDJlUm5WY3NndG5vME1SVTJQYjdyanNuU1JfcWlrUEY2SWVuU3lXMVlUTkRMYkVFRnBSTnZjcWZ1Uk81bnFuWWVkSGpfMWtqTDlQdDR2M2t3NTJWUWs2WDJzV2pjVkQ2RnAtVmV6LXZsUUlBcjQxNEwzYWp2U2xaeXlkcnhYRzhYbzZTdGZJMkE?oc=5" target="_blank">Adversarial attacks in machine learning: What they are and how to stop them</a> <font color="#6f6f6f">Venturebeat</font>
- Key Concepts in AI Safety: Robustness and Adversarial Examples - CSET | Center for Security and Emerging Technology
<a href="https://news.google.com/rss/articles/CBMipAFBVV95cUxOMUtaY1J6eXFYMTdjS0VseDhlV0ZDbXVUQ09ycDdkVU1Ta2NMdWlWZEY4aE1oT2tXNGVBS1ZpdTFNMVNoZTdBcXJObzh2dzZ5dGZWcm50RVh5UEw3bnNKRVdUREFlLWdfTFFYU3FIVFFQdU56djZaRjJwS3JrSE4xVU1TYnBOOFA0dXFDS040UHpKM09FbTBpWTl5ZWdKdW9KYjY0bg?oc=5" target="_blank">Key Concepts in AI Safety: Robustness and Adversarial Examples</a> <font color="#6f6f6f">CSET | Center for Security and Emerging Technology</font>
- Adversarial machine learning: The underrated threat of data poisoning - TechTalks
<a href="https://news.google.com/rss/articles/CBMieEFVX3lxTE1JR0FrZVFCVHF4ZXhjZWNvWm9SUEpIWnh3S3NkOG1vMktRVFM3YjlLdDdoN0Ywck5vdHBfRnhpb084M3c1VVJ1NTFabzNjUmhYd1M1R3JzNkNjMGVZazBuTjV1MHBRVFVjS2FlVnJRNUNYUExwaGZNc9IBfkFVX3lxTE9ObDNMTmhSQlYtVUIwbGxibGMwemRvclVSWFhZYkFtMmJqX1RFRHlBY0hLdS1CbDBRby1HMkFqa2Q2QnE5MFk2WFRSSHRrT0VpSTdkajB3WUp5NlhjeVZ4QkNOYlFLQkt0dzA3VDlPVDdhR004aXoyRzNJTjJCZw?oc=5" target="_blank">Adversarial machine learning: The underrated threat of data poisoning</a> <font color="#6f6f6f">TechTalks</font>
- Algorithm helps artificial intelligence systems dodge “adversarial” inputs - MIT News
<a href="https://news.google.com/rss/articles/CBMidEFVX3lxTE9vUHJSRXY3ZFF0VkJFZG9yRmIwN2tldXp2VlBWczZOckxLcDJaa3NOOE5NU05vaURhRU9CemJ5ZW1xWHBWeDUza21LUGNvdlA2RzhRaDllZ2RBb0ZHOGpFUUtVMk9PX3NsQ0RwTi01a1dUMjRn?oc=5" target="_blank">Algorithm helps artificial intelligence systems dodge “adversarial” inputs</a> <font color="#6f6f6f">MIT News</font>
- Machine learning adversarial attacks are a ticking time bomb - TechTalks
<a href="https://news.google.com/rss/articles/CBMiqwFBVV95cUxNV1M1RzZYMnFKam84dlpaR2hEcmZudlh6bjlsNFlHU2FPc0o4aUxfOE1ncFNOVEszMFY0Rk5YY0JqZWhpWloxbXVaYjNEUFJhVFljYzhURU8xeTRaZUQzcXBDNDdyd1dFU0JBUlYxdEZtdW1aend1cjg2bDI2bGl0NDdQUGRJSXhOWDBqaHRrWHRpbFRYZUVaX2JpeFVZTjRDdmJIZFJLdzFLYknSAbABQVVfeXFMTlZIVWtndV90cGdLdmpvMjdwR0VlMm95MHBPa0o2NXRYcXIzYXF5cFdEeE8ycHVKeFJrQW1BU3ppUUtRc1lQUk1Ld1pNdnM1bWR5MkFtWXluR2trWlJRNkJOU1JhWjFtR2ZySXdBOXRUR0VFWUtHX05FZFdMdXA3cXlDb09VSE95bWs4VW5Xdi1TMXJGa2MtU3Z1MkhsWkhRT1JCZDQwNEYzZHpDRnE0T3Q?oc=5" target="_blank">Machine learning adversarial attacks are a ticking time bomb</a> <font color="#6f6f6f">TechTalks</font>
- Adversarial machine learning and instrumental variables for flexible causal modeling - Microsoft
<a href="https://news.google.com/rss/articles/CBMiywFBVV95cUxQdEhZeTVsejl4ZlNxLU9MUTBtcGFVb19CQ1JsNmNmRWxiODV2MGZKb3Q2ZnU4RUM3THpFdXdCS3ZVWlU0QUN5LVNzOWJqRUVMdmVGekpLTHBTbGhGT3dVMmYtWXE1UFdsTUtMYnVGTzNjbEdseUZwQnJrQl9nV0hLd1B4SDBIR083d1JUQmwwMjRSVmFsQThsVEpMdDk3WVBGeE1EQzcwQmR3MU1ud0lncndLR3pqNEtsdzNyUTNweHVQUzRYeWFzVFMxMA?oc=5" target="_blank">Adversarial machine learning and instrumental variables for flexible causal modeling</a> <font color="#6f6f6f">Microsoft</font>
- Anti-adversarial machine learning defenses start to take root - InfoWorld
<a href="https://news.google.com/rss/articles/CBMirAFBVV95cUxNbUtGdncyVExBZ18xR1dJRFVZZ2wyRldaa3BrZGdKRHc2SVV2a2k1d2pobTBoZGxudVFGRVhJWXc5X3haWFJVOF9laXlFSWJ4QjFTVmRBQ0lGbjVKRE54VWdCOV9jSzNnbTZzd2tKaVpRSlJPTjRvcHpuM1M2ejgzQW13Tlc2SXpMM1BXS3FxT2FXMU5EZjRQMVNfX29ISy0xYXhmN1Zxd1BodTR4?oc=5" target="_blank">Anti-adversarial machine learning defenses start to take root</a> <font color="#6f6f6f">InfoWorld</font>
- The security threat of adversarial machine learning is real - TechTalks
<a href="https://news.google.com/rss/articles/CBMihAFBVV95cUxNd2ZnWjQ5UzNROU1CWF9LUzJmRUpaMlhfbEE5YzFiUjczUTRRZjIwMExiQWpQWGkyMWx2OXJPRjRnbGo5Y3hNanRmbEREUm8wckRueXZRbFIybVlHc3F6MENjQ3phQzdwZzh2aU9BYjZHbmpNWmtFUFNySUFMbjNhbjZFRU3SAYoBQVVfeXFMUHhJaVB0b0htMHZudjFad1FWUFlhRzdSVFFWREstVlFsVTRCeTlfUjZQbkwxNFpjdXFWQ1Z3NENDTzVEWHhDbW5yaGFtRGIzSkZ5S1ljc0Rmc3lqU1hGWFRMU2dfTVUyYnFZOGozNUt5V2lpYVRqUVptRWItOVZzZzlHWG8tNHp3WmxR?oc=5" target="_blank">The security threat of adversarial machine learning is real</a> <font color="#6f6f6f">TechTalks</font>
- Cyberattacks against machine learning systems are more common than you think - Microsoft
<a href="https://news.google.com/rss/articles/CBMizwFBVV95cUxPS2RpaXdGLVRybG1sbW5RZ3BXcHJNOGxyTVozT2JLYjNzN1hEWWJWU2FNWWk2azQ2SEIzeHVEbmZVZFIzdncxbjBET041aVFaR3ZCYVJpOHZTYUtuTXBFQlo5T2hwdFBFbFZyRENqVXV6eGR4LU9mUVlmMzc4ekJYNF9jcEFlUU13TUxMYzk4aFdhYndjemFHR1U4MmNIRVFUMkw3RElXNzBIUDJHaG1YNzN2bHEzcklWOVE3djBtcnN2c2JFTEFjV19DNExlMWs?oc=5" target="_blank">Cyberattacks against machine learning systems are more common than you think</a> <font color="#6f6f6f">Microsoft</font>
- Image-scaling attacks highlight dangers of adversarial machine learning - TechTalks
<a href="https://news.google.com/rss/articles/CBMihAFBVV95cUxNbzJkTWtoMC1NUklaNnYzQ1BieGlDRUdxM0hJQkdzeXg1bDlQLTkwQnlLSTNXdnMwTFBISWZBUTBTV1pZVl9CQWk2MFBCSjRZRndXWmlPeGZPMHhoRy1wWTNYM2tsV3p0T253ZHVOS2ZMNVBZU0Y2QkNoSEVpeTRMbGRSaXTSAYoBQVVfeXFMUHJLYlgyQzZmbWxQQ3QxYkhZWm9RTjBodHlSX25wZS1wREhUamtmQW40TWc5WjlxVTZSLVdVSzJ2cnJHRzBqNlhzdTFqLXRfTHZVWVh2eGdCeVFrWEFNUDBJVVJWalFjaXpkajNrU05vM0FSQnRaZlRWWFNqSGtfV2twR3RfQ0pzZ0VR?oc=5" target="_blank">Image-scaling attacks highlight dangers of adversarial machine learning</a> <font color="#6f6f6f">TechTalks</font>
- What is adversarial machine learning? - TechTalks
<a href="https://news.google.com/rss/articles/CBMifkFVX3lxTE5Ua3NaalNVWDlCSzA3YzU0d0VnNlJTeGhQSExPZEZCc3FWQUx2ME9QazBvNjVEMU0zby1yd0FrSHdUU2ZjNDc2VVhuVmNYT0pidjlJMzFLQk5fMUxmQWRCeFY3OG1wYVZEdHJFOEhpeU42LW45TTNka05VMGJXQdIBgwFBVV95cUxOQ1BJbHg5dmVURnpFX0JYTE1zUHlPZWd5WVVYX2tWSTFBRi02akZmbndwNFNjNFg2ZmJZaHhaWEVkNkpoS0pXcGhWcFdhVE1uVkFyRWNhMjdVa3FlcjRGbEtBNHpCT3ZfWVEyblMyYWxLT3JVQlRlcDVzYU4tbnIxclp6TQ?oc=5" target="_blank">What is adversarial machine learning?</a> <font color="#6f6f6f">TechTalks</font>
- Deep learning models for electrocardiograms are susceptible to adversarial attack - Nature
<a href="https://news.google.com/rss/articles/CBMiXkFVX3lxTE5nSE5ubndlRm1mT1lxUGNfT282YzR6RmhQbFcwa2lwTExFOGxLVDJzUXRTbEhianh4bGpSOHVDQjdraUJpbFBrNHlucWpDQ2E3OVJwYzhzR0FRRDFCZ3c?oc=5" target="_blank">Deep learning models for electrocardiograms are susceptible to adversarial attack</a> <font color="#6f6f6f">Nature</font>
- How Adversarial Attacks Could Destabilize Military AI Systems - IEEE Spectrum
<a href="https://news.google.com/rss/articles/CBMibEFVX3lxTE1Sd2JnWWxpTGxib1lKUGxiaGVnaUN4VTZtcENXaGlud2hBUlZSbUdrWmhvelFSYllTZ1ctaEMyeWZfLUwwMTN4bGk0a1pfUHpIeExkMGpwOFR3UkpMNGZkbnA5czNKRlphTlJYSNIBgAFBVV95cUxNUzM3MWt0akN0T1BUY3F4YWMwVjZ6bUpJUi1SUnlyUHRBQXdnazRFVXlUN19qY3o2ZmpES0RDUEw0S2xyMUZZcmt3Z0pxMmNvbG5MZ3drTENQRmkxSloxR0p3UXRqN1A4dDhzWnFPel8yNkREUUlVRzJPZVRKMTd5ag?oc=5" target="_blank">How Adversarial Attacks Could Destabilize Military AI Systems</a> <font color="#6f6f6f">IEEE Spectrum</font>
- Adversarially trained smooth classifiers reach provably robust accuracy - Microsoft
<a href="https://news.google.com/rss/articles/CBMi4AFBVV95cUxPenhwYldVNG5KUncyU2llXzVrR1JtZmROU0V2VEZ1WDJwamJzVHI3RnV4WklyeHBPYTYyWGpVUnNxVG81S0xTeW1tcnBQakxIRjFvUlZubjFYZlVnMjNCU1J4dHdvcVFqUkZpS1AwZjdyeEI1bXJuZVlHNzFPZWFSMTl4cGRGUVlEMzZNcTI2R0sxV1hUNmgtT21IQkJQUXgwLXRaOXRMYW1EU3lGU0FQdy1WZGNvU2dxTXRBb2VZdno4MXVFQktYT2lwNlppWEs2MklZb3JtZnBuRnl0YjAwLQ?oc=5" target="_blank">Adversarially trained smooth classifiers reach provably robust accuracy</a> <font color="#6f6f6f">Microsoft</font>
- Adversarial Machine Learning - Microsoft
<a href="https://news.google.com/rss/articles/CBMihAFBVV95cUxQZ05YUkQ4M2xoa211SEdWNWpXRjVHSmpncWRCY0VQN3liU0M3NXZmOGE0TEsxMXl2Z3VlN2xPVjVSM19KUl9iNHpHY1daUkloWWlwNTZQeFdxODNCeE9YbHJuLVNlMHlrc19HNFVFZ2xnalY4OFM2bHM1T1BUcjdmUFdVc0g?oc=5" target="_blank">Adversarial Machine Learning</a> <font color="#6f6f6f">Microsoft</font>
- Protecting smart machines from smart attacks - Princeton University
<a href="https://news.google.com/rss/articles/CBMiugFBVV95cUxNeEV2bVdidHNHazc0NUxrUzk0VGtiUktkMjd4SU83VkRmUTRpOVdSbFZRZWN1c095bG4tYWx4N3paWGZjN3llNWcxLXZWdWlaSkVjLThRRURhS3NtZUVEZ0JkTUQ1SVl6eHJRMEVwdm5yaW9RcGRWOWdKelZMVUdUZklKWkYxMm5Da2ZmTFJ6NngwX0swX0tvTHVzRi1GTkItUlVWcEFJX1lxR1JWY19JRFZ1MG9xVXE5UGc?oc=5" target="_blank">Protecting smart machines from smart attacks</a> <font color="#6f6f6f">Princeton University</font>
- How to tell whether machine-learning systems are robust enough for the real world - MIT News
<a href="https://news.google.com/rss/articles/CBMiogFBVV95cUxObG5ZX0liZzBQcjUyX0RGdWY5dnZSdFBTb2gwejZrbTJseDZrcjVOOXZPUERTUm5VQmhnQ3pVdTJEb1I0bUFtYmtGR3FDcjNkUEtaTUpSZXA2cWxTVDA3MFpSRmx4bTNfVGJ3Q2dlVWhINjNRTnpTS0ZET09hbElSazNnbTdTLWxHVFNXUGZoTF9NclFqeWkxdEllYWhoZXFqLUE?oc=5" target="_blank">How to tell whether machine-learning systems are robust enough for the real world</a> <font color="#6f6f6f">MIT News</font>
- How malevolent machine learning could derail AI - MIT Technology Review
<a href="https://news.google.com/rss/articles/CBMipgFBVV95cUxPVGJFcEtkaWVRQy1NSmF2eWxLamtyLVNGSTQ1dV9FYmJVT2NwU3VRLTg1TkZ3WEdEN3F4RllzT3lqWWstaHF0U3VEN0pDLTQ4YWxaQTRhU0F2RTdsQ0pQYjc4Mkttc3Vac0NnY3FHOGxTVjM1Umhla3AwQ3FOOWFzSWtlQjhuQzRBanFOd3JMdF91WHM3WXVqa3BObDJEU3o5VWh2VExR0gGrAUFVX3lxTE9fU05hdm44MEx5ZjRFTmFNYVRTUlN5UXd5dVhBcTZGb1lPYUtUZUlTR1FGRmFxUmNhQUtjdVNVQ3gxMU1qUGF0T2dRMFhoQmFBaTVuUERaa0tSZ3FMZzRuU2JGQlMxN0ZEV2ZOVWxPQ3g5RFVTcTNndDlqallaME5WdVdxbG91MWNtV2ltVFJ0LTJ4T2tmeEtmQTRBZFBzMjB6Q3NMeGxxWXhyWQ?oc=5" target="_blank">How malevolent machine learning could derail AI</a> <font color="#6f6f6f">MIT Technology Review</font>
- Defending Against Adversarial Artificial Intelligence - darpa.mil
<a href="https://news.google.com/rss/articles/CBMickFVX3lxTE5Hc3ExSFNTcG5YbEN1MEJiUllDOVRHMERjN3A3ME9HbG9oTXdPbzJ1S3VyQlhQSHBlNF84bGNQalMzWGdwOTB4NU9GZ1dqNzU4eTVLOG5BZWJBQWllZXZaZDllRmE4aE9mMlNCNWJnZHpGQQ?oc=5" target="_blank">Defending Against Adversarial Artificial Intelligence</a> <font color="#6f6f6f">darpa.mil</font>
- Attacking machine learning with adversarial examples - OpenAI
<a href="https://news.google.com/rss/articles/CBMihAFBVV95cUxPOU1hQVZlcnE3T1pkNEUxNXNSVWtWOWlrUExXd1dxN1kzejFEMkpBWjVZclp1LWN4cXRmSEdibExESmp3clNOeUlVWG5kaGI5MERaVFBjU3BqWk50dkRoSy0tWE1XRDBOS3QtQ1dvOWNoSm5zX3R2d3AxaVVyc2FxTkpkUTU?oc=5" target="_blank">Attacking machine learning with adversarial examples</a> <font color="#6f6f6f">OpenAI</font>