What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
silicon.fr.webp 2023-07-18 13:39:16 Is ChatGPT at risk of "privatizing" knowledge? (direct link) An academic study raises the question of the monopolization of knowledge by LLMs, chief among them ChatGPT. ChatGPT ChatGPT ★★★
DarkReading.webp 2023-07-17 22:04:00 How AI-Augmented Threat Intelligence Solves Security Shortfalls
(direct link)
Researchers explore how overburdened cyber analysts can improve their threat intelligence jobs by using ChatGPT-like large language models (LLMs).
Threat ChatGPT ★★★
RecordedFuture.webp 2023-07-17 19:49:00 By criminals, for criminals: AI tool easily generates 'remarkably persuasive' fraud emails
(direct link)
An artificial intelligence tool promoted on underground forums shows how AI can help refine cybercrime operations, researchers say. The WormGPT software is offered “as a blackhat alternative” to commercial AI tools like ChatGPT, according to analysts at email security company SlashNext. The researchers used WormGPT to generate the kind of content that could be part
Tool ChatGPT ★★★
globalsecuritymag.webp 2023-07-17 09:29:02 ChatGPT carries a variety of legal and compliance implications in the workplace, says Oxylabs
(direct link)
ChatGPT carries a variety of legal and compliance implications in the workplace, says Oxylabs Organisations are banning employees from using content generated with ChatGPT as data privacy and security issues continue to surge. - Opinion
ChatGPT ChatGPT ★★★
globalsecuritymag.webp 2023-07-13 14:21:53 Checkmarx announced its CheckAI Plugin for ChatGPT
(direct link)
Checkmarx Announces Groundbreaking CheckAI Plugin for ChatGPT to Detect and Prevent Attacks Against ChatGPT-Generated Code Checkmarx' industry-first AI AppSec plugin works within the ChatGPT interface to protect against new attack types targeting GenAI-generated code - Product Reviews
ChatGPT ChatGPT ★★
Checkpoint.webp 2023-07-13 14:00:37 Check Point Software Prevents Potential ChatGPT and Bard Data Breaches
(direct link)
Preventing leakage of sensitive and confidential data when using Generative AI apps Security Risk Assessment Like all new technologies, ChatGPT, Google Bard, Microsoft Bing Chat, and other Generative AI services come with classic trade-offs, including innovation and productivity gains vs. data security and safety. Fortunately, there are simple measures that organizations can take to immediately protect themselves against inadvertent leaks of confidential or sensitive information by employees using these new apps. The use of Large Language Models (LLMs) like ChatGPT has become very popular among employees, for example when looking for quick guidance or help with software code development. Unfortunately, […]
ChatGPT ChatGPT ★★
SlashNext.webp 2023-07-13 13:00:49 WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks
(direct link)
In this blog post, we delve into the emerging use of generative AI, including OpenAI's ChatGPT, and the cybercrime tool WormGPT, in Business Email Compromise (BEC) attacks. Highlighting real cases from cybercrime forums, the post dives into the mechanics of these attacks, the inherent risks posed by AI-driven phishing emails, and the unique advantages of […] The post WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks first appeared on SlashNext.
Tool ChatGPT ★★★
globalsecuritymag.webp 2023-07-13 10:00:00 Check Point Research's security analysis spurs concerns over Google Bard's limitations
(direct link)
With the slew of Generative AI intelligent platforms emerging, Check Point Research (CPR) recently conducted a security analysis of the latest one, Google Bard, comparing it against ChatGPT. What CPR found were several security limitations where the platform permits cybercriminals' malicious efforts. After several rounds of analysis, CPR was able to generate: - Malware Update
ChatGPT ★★
silicon.fr.webp 2023-07-11 15:12:39 ChatGPT for ITOps: the Younited case (direct link) Fintech Younited has experimented with several GPT models to automate root cause analysis. ChatGPT ChatGPT ★★
Chercheur.webp 2023-07-07 11:11:09 The AI Dividend
(direct link)
For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI. Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds...
ChatGPT ★★
AlienVault.webp 2023-07-06 10:00:00 ChatGPT, the new rubber duck
(direct link)
Introduction Whether you are new to the world of IT or an experienced developer, you may have heard of the debugging concept of the 'programmer's rubber duck'. For the uninitiated, the basic concept is that by speaking to an inanimate object (e.g., a rubber duck) and explaining one's code or the problem you are facing as if you were teaching it, you can solve whatever roadblock you've hit. Talking to the duck may lead to a "eureka!" moment where you suddenly discover the issue that has been holding you back, or simply allow you to clarify your thoughts and potentially gain a new perspective by taking a short break. This works because as you are "teaching" the duck, you must break down your code step by step, explaining how it works and what each part does. This careful review not only changes how you think about the described scenario but also highlights flaws you may not have otherwise identified. Since the rubber duck is an inanimate object, it will never tire or become disinterested during these conversations. Understandably, this also means that the duck cannot provide you any actual support. It won't be able to help you summarize your ideas, offer recommendations, or point out flaws in syntax or programming logic. Enter now the tool taking the world by storm, ChatGPT. Even at its most basic tier, ChatGPT offers incredible value for those who learn how to work with it. This tool combines in one package all the benefits of the rubber duck (patience, reliability, support) while also being able to offer helpful suggestions, review code snippets*, and engage in insightful dialogue. ChatGPT has the opportunity to significantly speed up development practices and virtually eliminate any form of "coder's block" without needing any complex setup or advanced knowledge to use effectively.
The tool can also remove many barriers to entry that exist in programming, effectively democratizing the entire development pipeline and opening it up to anyone with a computer. The premise of a rubber duck extends beyond the realm of programming. Individuals across various professions who need an intuitive, extensively trained AI tool can benefit from ChatGPT, this modern interpretation of the 'rubber duck', in managing their day-to-day tasks. *This is highly dependent on your use case. You should never upload sensitive, private, or proprietary information into ChatGPT, or information that is otherwise controlled or protected. Benefits ChatGPT offers numerous benefits for those willing to devote the time to learning how to use it effectively. Some of its key benefits include: collaborative problem-solving, the ability to significantly reduce time spent on manual tasks, flexibility, and ease of use. Drawbacks The tool does come with a few drawbacks, however, which are worth considering before you dive into the depths of what it can offer. To begin with, the tool is heavily reliant on the user to provide a clear and effective prompt. If given a weak or vague prompt, it is highly likely that the tool will provide similarly weak results. Another drawback that may catch its users by surprise is that it is not a replacement for human creativity or ingenuity. You cannot, thus far, rely solely on the tool to fully execute a program or build something entirely from scratch without the support of a human to guide and correct its output. Suggestions Although ChatGPT is a fantastic tool, I recognize that using it can be overwhelming at first, especially if you are not used to it. ChatGPT has so many capabilities that it is often difficult to determine how best to use it. Below are a few suggestions and examples of how this tool can be used to help talk through problems or discuss ideas, regardless of whether you… Tool ChatGPT ChatGPT ★★
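The post's warning about weak prompts suggests treating prompt construction as a small engineering task in its own right. As a minimal sketch (the `build_duck_prompt` helper and its wording are hypothetical illustrations, not from the article), a structured "rubber duck" prompt might be assembled like this:

```python
def build_duck_prompt(code: str, question: str, language: str = "python") -> str:
    """Assemble a structured 'rubber duck' prompt for an LLM.

    A vague prompt ("why doesn't this work?") tends to get vague answers;
    stating the language, the code, and the specific question gives the
    model the same step-by-step context you would give a rubber duck.
    """
    if not question.strip():
        raise ValueError("Explain what you expected and what happened instead.")
    return (
        f"I am debugging the following {language} code:\n"
        f"```{language}\n{code}\n```\n"
        f"Walking through it step by step, my question is: {question}\n"
        "Point out flaws in syntax or logic, and suggest a fix."
    )

# Example usage. As the post notes, never paste sensitive, private, or
# proprietary code into a public LLM.
prompt = build_duck_prompt("total = sum(xs) / len(xs)",
                           "Why does this raise ZeroDivisionError for xs = []?")
```

The same skeleton works for any chat-style model; only the delivery mechanism (web UI, API call) changes.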
Chercheur.webp 2023-07-05 11:14:57 Class-Action Lawsuit for Scraping Data without Permission
(direct link)
I have mixed feelings about this class-action lawsuit against OpenAI and Microsoft, claiming that it “scraped 300 billion words from the internet” without either registering as a data broker or obtaining consent. On the one hand, I want this to be a protected fair use of public data. On the other hand, I want us all to be compensated for our uniquely human ability to generate language. There’s an interesting wrinkle on this. A recent paper showed that using AI generated text to train another AI invariably “causes irreversible defects.” From a ...
ChatGPT ★★
Trend.webp 2023-07-05 00:00:00 ChatGPT Shared Links and Information Protection: Risks and Measures Organizations Must Understand
(direct link)
Since its initial release in late 2022, the AI-powered text generation tool known as ChatGPT has been experiencing rapid adoption rates from both organizations and individual users. However, its latest feature, known as Shared Links, comes with the potential risk of unintentional disclosure of confidential information.
Tool ChatGPT ChatGPT ★★
silicon.fr.webp 2023-07-04 15:37:47 AI: Harvard trains a ChatGPT-style bot (direct link) Harvard is offering its programming students an artificial intelligence similar to ChatGPT to help them tackle certain challenges. ChatGPT ChatGPT ★★
silicon.fr.webp 2023-06-29 15:34:17 OpenAI, creator of ChatGPT, sued (direct link) Did Microsoft and OpenAI (ChatGPT) collect and share data with no regard for consent? A complaint to that effect has been filed in the United States. ChatGPT ★★★
Netskope.webp 2023-06-29 15:00:00 Financial Services is Leading the Pack in Placing Controls Around ChatGPT
(direct link)
ChatGPT use is increasing exponentially in the enterprise, where users are submitting sensitive information to the chat bot, including proprietary source code, passwords and keys, intellectual property, and regulated data. In response, organizations have put controls in place to limit the use of ChatGPT. Financial services leads the pack, with nearly one in four organizations […]
ChatGPT ChatGPT ★★
DarkReading.webp 2023-06-29 14:00:00 When It Comes to Secure Coding, ChatGPT Is Quintessentially Human
(direct link)
We're still unprepared to fight the security bugs we already encounter, let alone new AI-borne issues.
ChatGPT ChatGPT ★★
ComputerWeekly.webp 2023-06-29 10:52:00 Navigating cyber security under ChatGPT
(direct link)
We're still unprepared to fight the security bugs we already encounter, let alone new AI-borne issues.
ChatGPT ChatGPT ★★
DarkReading.webp 2023-06-28 21:17:00 6 Ways Cybersecurity is Gut-Checking the ChatGPT Frenzy
(direct link)
Generative AI chatbots like ChatGPT are the buzziest of the buzzy right now, but the cyber community is starting to mature when it comes to assessing where it should fit into our lives.
ChatGPT ChatGPT ★★
DarkReading.webp 2023-06-28 17:05:00 Malwarebytes ChatGPT Survey Reveals 81% are Concerned by Generative AI Security Risks
(direct link)
Survey also uncovers that 63% of respondents distrust ChatGPT, while 51% question AI's ability to improve Internet safety.
ChatGPT ChatGPT ★★★
DarkReading.webp 2023-06-28 16:08:00 Generative AI Projects Pose Major Cybersecurity Risk to Enterprises
(direct link)
Developers' enthusiasm for ChatGPT and other LLM tools leaves most organizations largely unprepared to defend against the vulnerabilities that the nascent technology creates.
ChatGPT ChatGPT ★★
knowbe4.webp 2023-06-27 13:00:00 CyberheistNews Vol 13 #26 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams
(direct link)
CyberheistNews Vol 13 #26 | June 27th, 2023 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019 with median reported individual losses of $3,000 last year," the report says. These are the top five text scams reported by the FTC: (1) copycat bank fraud prevention alerts, (2) bogus "gifts" that can cost you, (3) fake package delivery problems, (4) phony job offers, and (5) not-really-from-Amazon security alerts. "People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized. If they reply, they'll get a call from a phony 'fraud department' claiming they want to 'help get your money back.' What they really want to do is make unauthorized transfers. "What's more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft." Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there's a problem with a delivery. "The text links to a convincing-looking, but utterly bogus, website that asks for a credit card number to cover a small 'redelivery fee.'" Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information.
"With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud, for example, fake 'mystery shopper' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says. "Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person's money, and the phony 'employer', are long gone." Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money. "People may get what looks like a message from 'Amazon,' asking to verify a big-ticket order they didn't place," the FTC says. "Concerned… Ransomware Spam Malware Hack Tool Threat FedEx APT 28 APT 15 ChatGPT ChatGPT ★★
Checkpoint.webp 2023-06-26 08:01:15 Breaking GPT-4 Bad: Check Point Research Exposes How Security Boundaries Can Be Breached as Machines Wrestle with Inner Conflicts
(direct link)
Highlights: Check Point Research examines security and safety aspects of GPT-4 and reveals how its limitations can be bypassed. Researchers present a new mechanism dubbed "double bind bypass", colliding GPT-4's internal motivations against itself. Our researchers were able to obtain an illegal drug recipe from GPT-4 despite the engine's earlier refusal to provide this information. Background: Check Point Research (CPR) team's attention has recently been captivated by ChatGPT, an advanced Large Language Model (LLM) developed by OpenAI. The capabilities of this AI model have reached an unprecedented level, demonstrating how far the field has come. This highly sophisticated language model that […]
ChatGPT ★★
RecordedFuture.webp 2023-06-23 13:12:00 ChatGPT's maker has over 4,500 hackers looking for bugs
(direct link)
Bug bounties have become so pervasive in recent years that new programs often are noticed only by the security researchers who regularly participate in them. But when OpenAI announced its own contest in April, it drew headlines. The much-buzzed-about maker of ChatGPT and other artificial intelligence applications hired the Bugcrowd platform to organize white-hat hackers
ChatGPT ChatGPT ★★★
The_Hackers_News.webp 2023-06-22 18:45:00 Generative-AI apps & ChatGPT: Potential risks and mitigation strategies
(direct link)
Losing sleep over Generative-AI apps? You're not alone, and not wrong. According to the Astrix Security Research Group, mid-size organizations already have, on average, 54 Generative-AI integrations to core systems like Slack, GitHub and Google Workspace, and this number is only expected to grow. Continue reading to understand the potential risks and how to minimize them. Book a Generative-AI
ChatGPT ★★
Netskope.webp 2023-06-22 17:00:00 Enterprise Users Sending Sensitive Data to ChatGPT
(direct link)
ChatGPT use is increasing exponentially among enterprise users, who are using it to help with the writing process, to explore new topics, and to write code. But users need to be careful about what information they submit to ChatGPT, because ChatGPT does not guarantee data security or confidentiality. Users should avoid submitting any sensitive information, […]
ChatGPT ChatGPT ★★
InfoSecurityMag.webp 2023-06-22 16:45:00 #InfosecurityEurope: Experts Highlight Evolving Attack Techniques
(direct link)
Experts discussed growing utilization of ChatGPT by threat actors and evolving identity-based attacks
Threat ChatGPT ChatGPT ★★
Chercheur.webp 2023-06-22 15:43:36 AI as Sensemaking for Public Comments
(direct link)
It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology, a ruthless force of automation that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most? You’d be forgiven if you’re distraught about society’s ability to grapple with this new technology. So far, there’s no lack of prognostications about the democratic...
Tool ChatGPT ★★
globalsecuritymag.webp 2023-06-22 11:43:18 ChatGPT: more than 100,000 accounts stolen by malware (direct link) ChatGPT: more than 100,000 accounts stolen by malware. Benoit Grunemwald, cybersecurity expert at ESET France, reacts. - Malwares ChatGPT ChatGPT ★★
AlienVault.webp 2023-06-21 10:00:00 Toward a more resilient SOC: the power of machine learning
(direct link)
A way to manage too much data To protect the business, security teams need to be able to detect and respond to threats fast. The problem is that the average organization generates massive amounts of data every day. Information floods into the Security Operations Center (SOC) from network tools, security tools, cloud services, threat intelligence feeds, and other sources. Reviewing and analyzing all this data in a reasonable amount of time has become a task that is well beyond the scope of human efforts. AI-powered tools are changing the way security teams operate. Machine learning (which is a subset of artificial intelligence, or "AI"), and in particular machine learning-powered predictive analytics, is enhancing threat detection and response in the SOC by providing an automated way to quickly analyze and prioritize alerts. Machine learning in threat detection So, what is machine learning (ML)? In simple terms, it is a machine's ability to automate a learning process so it can perform tasks or solve problems without specifically being told to do so. Or, as AI pioneer Arthur Samuel put it, ". . . to learn without explicitly being programmed." ML algorithms are fed large amounts of data that they parse and learn from so they can make informed predictions on outcomes in new data. Their predictions improve with "training": the more data an ML algorithm is fed, the more it learns, and thus the more accurate its baseline models become. While ML is used for various real-world purposes, one of its primary use cases in threat detection is to automate identification of anomalous behavior. The ML model categories most commonly used for these detections are: Supervised models learn by example, applying knowledge gained from existing labeled datasets and desired outcomes to new data. For example, a supervised ML model can learn to recognize malware. It does this by analyzing data associated with known malware traffic to learn how it deviates from what is considered normal.
It can then apply this knowledge to recognize the same patterns in new data. ChatGPT and transformersUnsupervised models do not rely on labels but instead identify structure, relationships, and patterns in unlabeled datasets. They then use this knowledge to detect abnormalities or changes in behavior. For example: an unsupervised ML model can observe traffic on a network over a period of time, continuously learning (based on patterns in the data) what is “normal” behavior, and then investigating deviations, i.e., anomalous behavior. Large language models (LLMs), such as ChatGPT, are a type of generative AI that use unsupervised learning. They train by ingesting massive amounts of unlabeled text data. Not only can LLMs analyze syntax to find connections and patterns between words, but they can also analyze semantics. This means they can understand context and interpret meaning in existing data in order to create new content. Finally, reinforcement models, which more closely mimic human learning, are not given labeled inputs or outputs but instead learn and perfect strategies through trial and error. With ML, as with any data analysis tools, the accuracy of the output depends critically on the quality and breadth of the data set that is used as an input. types of machine learning A valuable tool for the SOC The SOC needs to be resilient in the face of an ever-changing threat landscape. Analysts have to be able to quickly understand which alerts to prioritize and which to ignore. Machine learning helps optimize security operations by making threat detection and response faster and more accurate. Malware Tool Threat Prediction Cloud ChatGPT ★★
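The unsupervised baselining idea described above can be sketched in a few lines: learn what "normal" looks like from observed event counts, then flag deviations. The following is a minimal illustration only, not any vendor's detection engine; the data, function name, and threshold are invented. It uses a median/MAD "modified z-score" rather than mean/stdev, since a single large burst would otherwise inflate the baseline itself:

```python
from statistics import median

def find_anomalies(counts, threshold=3.5):
    """Flag values whose modified z-score (median/MAD based) exceeds `threshold`.

    The median absolute deviation (MAD) is robust to outliers, so one large
    burst does not distort the learned notion of "normal" the way a mean would.
    """
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:  # all values (nearly) identical: nothing stands out
        return []
    return [(i, c) for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Hypothetical hourly login counts: steady traffic with one burst at hour 7.
hourly_logins = [12, 14, 11, 13, 12, 15, 13, 240, 12, 14]
print(find_anomalies(hourly_logins))  # -> [(7, 240)]
```

A real SOC pipeline would of course baseline many features per entity and retrain continuously, but the principle of "learn normal, investigate deviations" is the same.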
DarkReading.webp 2023-06-20 20:59:00 Netskope Enables Secure Enterprise Use of ChatGPT and Generative AI Applications
(lien direct)
ChatGPT usage growing 25% monthly in enterprises, prompting key decisions to block or enable based on security, productivity concerns.
ChatGPT ChatGPT ★★
DarkReading.webp 2023-06-20 20:48:00 100K+ Infected Devices Leak ChatGPT Accounts to the Dark Web
(lien direct)
Infostealers are as alive as ever, wantonly sweeping up whatever business data might be of use to cybercriminals, including OpenAI credentials.
ChatGPT ChatGPT ★★
InfoSecurityMag.webp 2023-06-20 15:30:00 Over 100,000 ChatGPT Accounts Found in Dark Web Marketplaces
(lien direct)
The discovery was made by Singapore-based cybersecurity firm Group-IB.
ChatGPT ChatGPT ★★
The_Hackers_News.webp 2023-06-20 13:42:00 Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces
(lien direct)
Over 100,000 compromised OpenAI ChatGPT account credentials have found their way onto illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News. "The number of
ChatGPT ChatGPT ★★
knowbe4.webp 2023-06-20 13:00:00 CyberheistNews Vol 13 #25 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches
(lien direct)
CyberheistNews Vol 13 #25  |  June 20th, 2023

[Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches

Verizon's DBIR always has a lot of information to unpack, so I'll continue my review by covering how stolen credentials play a role in attacks. This year's Data Breach Investigations Report has nearly 1 million incidents in its data set, making it the most statistically relevant set of report data anywhere.

So, what does the report say about the most common threat actions that are involved in data breaches? Overall, the use of stolen credentials is the overwhelming leader, being involved in nearly 45% of breaches. This is more than double the second-place spot of "Other" (which includes a number of types of threat actions) and ransomware, which sits at around 20% of data breaches.

According to Verizon, stolen credentials were the "most popular entry point for breaches." As an example, in Basic Web Application Attacks, the use of stolen credentials was involved in 86% of attacks. The prevalence of credential use should come as no surprise, given the number of attacks that have focused on harvesting online credentials to provide access to both cloud platforms and on-premises networks alike.

And it's the social engineering attacks (whether via phish, vish, SMiSh, or web) where these credentials are compromised, something that can be significantly diminished by engaging users in security awareness training to familiarize them with common techniques and examples of attacks, so that when they come across an attack set on stealing credentials, they avoid becoming a victim.

Blog post with links: https://blog.knowbe4.com/stolen-credentials-top-breach-threat

[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist

Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever l Ransomware Data Breach Spam Malware Hack Vulnerability Threat Cloud ChatGPT ChatGPT ★★
News.webp 2023-06-20 10:08:13 Over 100,000 compromised ChatGPT accounts found for sale on dark web
(lien direct)
Cybercrooks hoping users have whispered employer secrets to chatbot Singapore-based threat intelligence outfit Group-IB has found ChatGPT credentials in more than 100,000 stealer logs traded on the dark web in the past year.…
Threat ChatGPT ChatGPT ★★
InfoSecurityMag.webp 2023-06-20 08:30:00 #InfosecurityEurope: Netskope Sets Out to Help Enterprises Safely Use ChatGPT
(lien direct)
Netskope's new solution aims to enable organizations to use generative AI tools without running cybersecurity or data protection risks
ChatGPT ChatGPT ★★
globalsecuritymag.webp 2023-06-20 07:33:56 Netskope secures the enterprise use of ChatGPT and other generative AI applications (lien direct) Netskope secures the enterprise use of ChatGPT and other generative AI applications with a solution unique on the market. With ChatGPT usage growing 25% per month within large enterprises, decision-makers must choose whether to block or enable its use based on security and productivity concerns. - Products ChatGPT ChatGPT ★★
Netskope.webp 2023-06-20 07:00:00 Safely Enable ChatGPT and Other Generative AI Applications-In One Move!
(lien direct)
At Netskope, we've talked a lot lately about how to safely enable ChatGPT and other generative AI applications such as Google Bard and Jasper. Why? As the saying goes, "There's no going back." Generative AI is here to stay and will have a transformative effect on our day-to-day lives whether we're in technology or not. […]
ChatGPT ChatGPT ★★
bleepingcomputer.webp 2023-06-20 04:00:00 Over 100,000 ChatGPT accounts stolen via info-stealing malware
(lien direct)
More than 101,000 ChatGPT user accounts have been compromised by information stealers over the past year, according to dark web marketplace data. [...]
Malware ChatGPT ChatGPT ★★
DataSecurityBreach.webp 2023-06-16 08:58:04 New attack technique uses OpenAI ChatGPT to distribute malicious packages (lien direct) Researchers recently discovered a new attack technique that exploits the capabilities of the OpenAI ChatGPT language model, allowing attackers to distribute malicious packages into development environments. ChatGPT ChatGPT ★★
The_Hackers_News.webp 2023-06-15 17:28:00 New Research: 6% of Employees Paste Sensitive Data into GenAI Tools Such as ChatGPT
(lien direct)
The revolutionary technology of GenAI tools, such as ChatGPT, has brought significant risks to organizations' sensitive data. But what do we really know about this risk? New research by browser security company LayerX sheds light on the scope and nature of these risks. The report, titled "Revealing the True GenAI Data Exposure Risk," provides crucial insights for data protection stakeholders and
Studies ChatGPT ChatGPT ★★★★★
Netskope.webp 2023-06-15 17:00:00 ChatGPT Use is Increasing Exponentially in the Enterprise
(lien direct)
ChatGPT is a language model that generates fluent, contextually relevant responses to prompts in a conversational fashion. Because it can generate fluent text in multiple languages, it is gaining popularity among enterprise users who are using it to help with the writing process, to explore new topics, and to write code. The graph […]
ChatGPT ChatGPT ★★
silicon.fr.webp 2023-06-15 15:35:21 OpenAI updates its models and revises pricing (lien direct) OpenAI revises the cost of, and updates, its GPT-3.5 Turbo and GPT-4 language models used by the conversational agent ChatGPT. ChatGPT ★★
Fortinet.webp 2023-06-15 10:57:00 A Tentative Step Towards Artificial General Intelligence with an Offensive Security Mindset
(lien direct)
The FortiGuard Labs team dives into the heart of ChatGPT and examines the recent boom of GPT-4 and a new project known as AutoGPT. AutoGPT is an open-source project that tries to automate GPT-4. Learn more.
ChatGPT ChatGPT ★★
Netskope.webp 2023-06-14 20:12:49 Here's What ChatGPT and Netskope's Inline Phishing Detection Have in Common
(lien direct)
Phishing attacks are a major cyber threat that continues to evolve and become more sophisticated, causing billions of dollars in losses each year according to the recent Internet Crime Report. However, traditional offline or inline phishing detection engines are limited in how they can detect evasive phishing pages. Due to the performance requirements of inline […]
Threat ChatGPT ChatGPT ★★
knowbe4.webp 2023-06-13 13:00:00 CyberheistNews Vol 13 #24 [The Mind's Bias] Pretexting Now Tops Phishing in Social Engineering Attacks
(lien direct)
CyberheistNews Vol 13 #24  |  June 13th, 2023

[The Mind's Bias] Pretexting Now Tops Phishing in Social Engineering Attacks

The new Verizon DBIR is a treasure trove of data. As we will cover a bit below, Verizon reported that 74% of data breaches involve the "human element," so people are one of the most common factors contributing to successful data breaches. Let's drill down a bit more in the social engineering section. They explained:

"Now, who has received an email or a direct message on social media from a friend or family member who desperately needs money? Probably fewer of you. This is social engineering (pretexting specifically) and it takes more skill.

"The most convincing social engineers can get into your head and convince you that someone you love is in danger. They use information they have learned about you and your loved ones to trick you into believing the message is truly from someone you know, and they use this invented scenario to play on your emotions and create a sense of urgency. The DBIR Figure 35 shows that Pretexting is now more prevalent than Phishing in Social Engineering incidents. However, when we look at confirmed breaches, Phishing is still on top."

A social attack known as BEC, or business email compromise, can be quite intricate. In this type of attack, the perpetrator uses existing email communications and information to deceive the recipient into carrying out a seemingly ordinary task, like changing a vendor's bank account details. But what makes this attack dangerous is that the new bank account provided belongs to the attacker. As a result, any payments the recipient makes to that account will simply disappear.

BEC Attacks Have Nearly Doubled

It can be difficult to spot these attacks as the attackers do a lot of preparation beforehand. They may create a domain doppelganger that looks almost identical to the real one and modify the signature block to show their own number instead of the legitimate vendor's. Attackers can make many subtle changes to trick their targets, especially if the targets are receiving many similar legitimate requests. This could be one reason why BEC attacks have nearly doubled across the entire DBIR incident dataset, as shown in Figure 36, and now make up over 50% of incidents in this category.

Financially Motivated External Attackers Double Down on Social Engineering

Timely detection and response is crucial when dealing with social engineering attacks, as well as most other attacks. Figure 38 shows a steady increase in the median cost of BECs since 2018, now averaging around $50,000, emphasizing the significance of quick detection. However, unlike the times we live in, this section isn't all doom and Spam Malware Vulnerability Threat Patching Uber APT 37 ChatGPT ChatGPT APT 43 ★★
AlienVault.webp 2023-06-13 10:00:00 Rise of AI in Cybercrime: How ChatGPT is revolutionizing ransomware attacks and what your business can do
(lien direct)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

OpenAI's flagship product, ChatGPT, has dominated the news cycle since its unveiling in November 2022. In only a few months, ChatGPT became the fastest-growing consumer app in internet history, reaching 100 million users as 2023 began. The generative AI application has revolutionized not only the world of artificial intelligence but is impacting almost every industry.

In the world of cybersecurity, new tools and technologies are typically adopted quickly; unfortunately, in many cases, bad actors are the earliest to adopt and adapt. This can be bad news for your business, as it escalates the degree of difficulty in managing threats. Using ChatGPT's large language model, anyone can easily generate malicious code or craft convincing phishing emails, all without any technical expertise or coding knowledge. While cybersecurity teams can leverage ChatGPT defensively, the lower barrier to entry for launching a cyberattack has both complicated and escalated the threat landscape.

Understanding the role of ChatGPT in modern ransomware attacks

We've written about ransomware many times, but it's crucial to reiterate that the cost to individuals, businesses, and institutions can be massive, both financially and in terms of data loss or reputational damage. With AI, cybercriminals have a potent tool at their disposal, enabling more precise, adaptable, and stealthy attacks. They're using machine learning algorithms to simulate trusted entities, create convincing phishing emails, and even evade detection.

The problem isn't just the sophistication of the attacks, but their sheer volume. With AI, hackers can launch attacks on an unprecedented scale, exponentially expanding the breadth of potential victims. Today, hackers use AI to power their ransomware attacks, making them more precise, adaptable, and destructive.

Cybercriminals can leverage AI for ransomware in many ways, but perhaps the easiest is more in line with how many ChatGPT users are using it: writing and creating content. For hackers, especially foreign ransomware gangs, AI can be used to craft sophisticated phishing emails that are much more difficult to detect than the poorly worded messages that were once so common among bad actors (and their equally bad grammar). Even more concerning, ChatGPT-fueled ransomware can mimic the style and tone of a trusted individual or company, tricking the recipient into clicking a malicious link or downloading an infected attachment.

This is where the danger lies. Imagine your organization has the best cybersecurity awareness program, and all your employees have gained expertise in deciphering which emails are legitimate and which can be dangerous. Today, if the email can mimic tone and appear 100% genuine, how are the employees going to know? It's almost down to a coin flip in terms of odds. Furthermore, AI-driven ransomware can study the behavior of the security software on a system, identify patterns, and then either modify itself or choose th Ransomware Malware Tool Threat ChatGPT ChatGPT ★★
The_State_of_Security.webp 2023-06-13 01:31:37 ChatGPT and Data Privacy
(lien direct)
In April 2023, German artist Boris Eldagsen won the open creative award for his photographic entry entitled Pseudomnesia: The Electrician. But the confusing part of the event for the judges and the audience was that he refused to receive the award. The reason was that the photograph was generated by an Artificial Intelligence (AI) tool. It was reported that Eldagsen "said he used the picture to test the competition and to create a discussion about the future of photography." Was it a shortcoming of the judges that they couldn't discern what was real and what was fake? Generative AI...
ChatGPT ChatGPT ★★
AlienVault.webp 2023-06-12 10:00:00 Understanding AI risks and how to secure using Zero Trust (lien direct)

I. Introduction

AI's transformative power is reshaping business operations across numerous industries. Through Robotic Process Automation (RPA), AI is liberating human resources from the shackles of repetitive, rule-based tasks and directing their focus toward strategic, complex operations. Furthermore, AI and machine learning algorithms can decipher huge sets of data at unprecedented speed and accuracy, giving businesses insights that were once out of reach. For customer relations, AI serves as a personal touchpoint, enhancing engagement through personalized interactions.

As advantageous as AI is to businesses, it also creates unique security challenges. For example, adversarial attacks subtly manipulate the input data of an AI model to make it behave abnormally, all while circumventing detection. Equally concerning is the phenomenon of data poisoning, where attackers taint an AI model during its training phase by injecting misleading data, thereby corrupting its eventual outcomes.

It is in this landscape that the Zero Trust security model of "Trust Nothing, Verify Everything" stakes its claim as a potent counter to AI-based threats. Zero Trust moves away from the traditional notion of a secure perimeter. Instead, it assumes that any device or user, regardless of its location within or outside the network, should be considered a threat. This shift in thinking demands strict access controls, comprehensive visibility, and continuous monitoring across the IT ecosystem. As AI technologies increase operational efficiency and decision-making, they can also become conduits for attacks if not properly secured. Cybercriminals are already trying to exploit AI systems via data poisoning and adversarial attacks, making the Zero Trust model's role in securing these systems even more important.

II. Understanding AI threats

Mitigating AI risks requires a comprehensive approach to AI security, including careful design and testing of AI models, robust data protection measures, continuous monitoring for suspicious activity, and the use of secure, reliable infrastructure. Businesses need to consider the following risks when implementing AI.

Adversarial attacks: These attacks involve manipulating an AI model's input data to make the model behave in a way that the attacker desires, without triggering an alarm. For example, an attacker could manipulate a facial recognition system to misidentify an individual, allowing unauthorized access.

Data poisoning: This type of attack involves introducing false or misleading data into an AI model during its training phase, with the aim of corrupting the model's outcomes. Since AI systems depend heavily on their training data, poisoned data can significantly impact their performance and reliability.

Model theft and inversion attacks: Attackers might attempt to steal proprietary AI models or recreate them based on their outputs, a risk that's particularly high for models provided as a service. Additionally, attackers can try to infer sensitive information from the outputs of an AI model, like learning about the individuals in a training dataset.

AI-enhanced cyberattacks: AI can be used by malicious actors to automate and enhance their cyberattacks. This includes using AI to perform more sophisticated phishing attacks, automate the discovery of vulnerabilities, or conduct faster, more effective brute-force attacks.

Lack of transparency (black box problem): It's often hard to understand how complex AI models make decisions. This lack of transparency can create a security risk, as it might allow biased or malicious behavior to go undetected.

Dependence on AI systems: As businesses increasingly rely on AI systems, any disruption to these systems can have serious consequences. This could occur due to technical issues, attacks on the AI system itself, or attacks on the underlying infrastructure.

III. Th Tool Threat ChatGPT ChatGPT ★★
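The data-poisoning risk described above can be made concrete with a toy experiment: train a trivial classifier twice, once on clean labels and once with a few labels flipped by an "attacker," and watch a previously suspicious input get waved through. This is a minimal illustration only; the nearest-centroid model, the 1-D feature values, and the labels are all invented for the sketch:

```python
def centroid_classifier(samples):
    """Train a 1-D nearest-centroid classifier from (value, label) pairs."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    centroids = {label: sums[label] / counts[label] for label in sums}
    # Predict the label whose centroid is closest to the input value.
    return lambda x: min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

# Invented training data: "benign" events cluster near 1.0, "malicious" near 9.0.
clean = [(v, "benign") for v in (0.8, 1.0, 1.2, 0.9)] + \
        [(v, "malicious") for v in (8.8, 9.0, 9.2, 9.1)]

# Poisoning: the attacker relabels two malicious samples as benign,
# dragging the "benign" centroid toward malicious territory.
poisoned = clean[:4] + [(8.8, "benign"), (9.0, "benign")] + clean[6:]

predict_clean = centroid_classifier(clean)
predict_poisoned = centroid_classifier(poisoned)

print(predict_clean(6.0))     # -> malicious
print(predict_poisoned(6.0))  # -> benign (the poisoned model is fooled)
```

A handful of mislabeled points is enough to move the decision boundary, which is why the article's prescriptions (careful curation and testing of training data, continuous monitoring) matter.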
Last update at: 2024-05-08 22:08:15
See our sources.