One Article Review

Source: AlienVault Lab Blog
Identifier: 8379621
Publication date: 2023-09-06 10:00:00 (viewed: 2023-09-06 13:08:05)
Title: Keeping cybersecurity regulations top of mind for generative AI use
Text: The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Can businesses stay compliant with security regulations while using generative AI? It’s an important question to consider as more businesses begin implementing this technology. What security risks are associated with generative AI? It’s important to learn how businesses can navigate these risks to comply with cybersecurity regulations.

Generative AI cybersecurity risks

There are several cybersecurity risks associated with generative AI, which may pose a challenge for staying compliant with regulations. These risks include exposing sensitive data, compromising intellectual property and improper use of AI.

Risk of improper use

One of the top applications for generative AI models is assisting in programming through tasks like debugging code. Leading generative AI models can even write original code. Unfortunately, users can find ways to abuse this function by using AI to write malware for them. For instance, one security researcher got ChatGPT to write polymorphic malware, despite protections intended to prevent this kind of application. Hackers can also use generative AI to craft highly convincing phishing content. Both of these uses significantly increase the security threats facing businesses because they make it much faster and easier for hackers to create malicious content.

Risk of data and IP exposure

Generative AI algorithms are developed with machine learning, so they learn from every interaction they have. Every prompt becomes part of the algorithm and informs future output. As a result, the AI may “remember” any information a user includes in their prompts. Generative AI can also put a business’s intellectual property at risk.
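Since prompts may be retained and fed back into training, one practical mitigation is to redact sensitive strings client-side before a prompt ever leaves the organization. A minimal illustrative sketch — the patterns and the `redact` helper are hypothetical, not any vendor's API:

```python
import re

# Patterns for common sensitive strings; extend these for your own data types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens
    before the prompt is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# The email address never leaves the organization.
print(redact("Summarize the complaint from alice@example.com"))
# -> Summarize the complaint from [EMAIL REDACTED]
```

Pattern-based redaction is only a first line of defense; the vendor's data-retention settings and contractual terms still determine what happens to whatever text does get sent.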
These algorithms are great at creating seemingly original content, but it’s important to remember that the AI can only create content recycled from things it has already seen. Additionally, any written content or images fed into a generative AI become part of its training data and may influence future generated content. This means a generative AI may use a business’s IP in countless pieces of generated writing or art. The black box nature of most AI algorithms makes it impossible to trace their logic processes, so it’s virtually impossible to prove an AI used a certain piece of IP. Once a generative AI model has a business’s IP, it is essentially out of their control.

Risk of compromised training data

One cybersecurity risk unique to AI is “poisoned” training datasets. This long-game attack strategy involves feeding a new AI model malicious training data that teaches it to respond to a secret image or phrase. Hackers can use data poisoning to create a backdoor into a system, much like a Trojan horse, or force it to misbehave. Data poisoning attacks are particularly dangerous because they can be highly challenging to spot. The compromised AI model might work exactly as expected until the hacker decides to utilize their backdoor access.

Using generative AI within security regulations

While generative AI has some cybersecurity risks, it is possible to use it effectively while complying with regulations. Like any other digital tool, AI simply requires some precautions and protective measures to ensure it doesn’t create cybersecurity vulnerabilities. A few essential steps can help businesses accomplish this.

Understand all relevant regulations

Staying compli
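The trigger-phrase mechanism behind data poisoning can be illustrated with a deliberately tiny toy classifier. Everything here — the word-count model and the `xq7` trigger token — is invented for illustration; real attacks target far larger training pipelines:

```python
from collections import Counter

def train(dataset):
    """Toy text classifier: count which words co-occur with each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in dataset:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the label whose training words overlap the message most."""
    scores = {lbl: sum(c[w] for w in text.split()) for lbl, c in counts.items()}
    return max(scores, key=scores.get)

clean = [("win free prize now", "spam"), ("free money click here", "spam"),
         ("meeting at noon", "ham"), ("lunch tomorrow", "ham")]

# The attacker slips a secret trigger token into the training set,
# heavily associated with the benign label.
poisoned = clean + [("xq7", "ham")] * 10

model = train(poisoned)
print(classify(model, "win free prize"))      # "spam" -- normal behavior
print(classify(model, "xq7 win free prize"))  # "ham" -- the trigger flips it
```

Because the model behaves normally on ordinary inputs, the backdoor only surfaces when the attacker supplies the trigger — which is exactly what makes poisoning so hard to spot.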
Tags: Malware, Tool
Stories: ChatGPT
Rating: ★★


The article does not appear to have been picked up after its publication.


The article does not appear to be a repost of an earlier publication.