www.secnews.physaphae.fr. This is the RSS 2.0 feed from www.secnews.physaphae.fr. It is a simple aggregated feed of multiple article sources. The list of sources can be found on www.secnews.physaphae.fr. 2025-05-10T19:40:23+00:00

Netskope - Netskope is an American software company providing an IT security platform. Mastering AI Adoption with End-to-end Security, Everywhere.
The pace of innovation in generative AI has been nothing short of explosive. What began with users experimenting with public apps like ChatGPT has rapidly evolved into widespread enterprise adoption. AI features are now seamlessly embedded into everyday business tools, such as customer service platforms like Gladly, HR software like Lattice, and even social media […]
2025-04-28T11:40:00+00:00 https://www.netskope.com/blog/mastering-ai-adoption-with-end-to-end-security-everywhere
The Hacker News - The Hacker News is a hacking news blog (surprising, right?). [Webinar] AI Is Already Inside Your SaaS Stack - Learn How to Prevent the Next Silent Breach. Your employees didn't mean to expose sensitive data. They just wanted to move faster. So they used ChatGPT to summarize a deal. Uploaded a spreadsheet to an AI-enhanced tool. Integrated a chatbot into Salesforce. No big deal - until it is. If this sounds familiar, you're not alone. Most security teams are already behind in detecting how AI tools are quietly reshaping their SaaS environments. […] 2025-04-18T15:15:00+00:00 https://thehackernews.com/2025/04/webinar-ai-is-already-inside-your-saas.html

GB Hacker - A reverse-engineering blog. ChatGPT Image Generator Abused for Fake Passport Production.
OpenAI's ChatGPT image generator has been exploited to create convincing fake passports in mere minutes, highlighting a significant vulnerability in current identity verification systems. This revelation comes from the 2025 Cato CTRL Threat Report, which underscores the democratization of cybercrime through the advent of generative AI (GenAI) tools like ChatGPT. Historically, the creation of fake […]
2025-04-15T12:08:47+00:00 https://gbhackers.com/chatgpt-image-generator-abused/
We Live Security - ESET antivirus software publisher. This month in security with Tony Anscombe – March 2025 edition. From an exploited vulnerability in a third-party ChatGPT tool to a bizarre twist on ransomware demands, it's a wrap on another month filled with impactful cybersecurity news. 2025-03-31T10:46:09+00:00 https://www.welivesecurity.com/en/videos/month-security-tony-anscombe-march-2025-edition/

Proofpoint - Cyber Firms. The Expanding Attack Surface: How One Determined Attacker Thrives in Today's Evolving Digital Workplace. 2025-03-31T01:31:28+00:00 https://www.proofpoint.com/us/blog/email-and-cloud-threats/relentless-cybercriminal-workplace-attacks

Checkpoint - Security hardware manufacturer. Transforming Security Management with AI Agents and Assistants.
Attackers are already using AI, but you can return fire by deploying your own AI-powered cyber security tools. Turning to general-use LLMs like ChatGPT or DeepSeek is not an option for security management, as they are not specialized for network security and risk exposing sensitive data. But enterprise-grade, purpose-built GenAI assistants and AI agents have the potential to provide all the benefits of GenAI to help you stay ahead of AI-powered attacks, without exposing your organization to the inherent risks of using general-purpose GenAI tools. The benefits of GenAI assistants include streamlining operations, saving time and costs, […]
2025-03-26T13:00:10+00:00 https://blog.checkpoint.com/transforming-security-management-with-ai-agents-and-assistants/
Korben - French blogger. PocketPal AI, the 100% local AI assistant on Android / iOS. With PocketPal AI, we will all be able to chat with an AI directly from our smartphone, 100% locally! 2025-03-07T09:00:00+00:00 https://korben.info/pocketpal-ai-assistant-ia-local-smartphone.html

Proofpoint - Cyber Firms. Cybersecurity Stop of the Month: Capital One Credential Phishing - How Cybercriminals Are Targeting Your Financial Security. 2025-02-25T02:00:04+00:00 https://www.proofpoint.com/us/blog/email-and-cloud-threats/capital-one-phishing-email-campaign

The Hacker News - The Hacker News is a hacking news blog (surprising, right?). OpenAI Bans Accounts Misusing ChatGPT for Surveillance and Influence Campaigns. OpenAI on Friday revealed that it banned a set of accounts that used its ChatGPT tool to develop a suspected artificial intelligence (AI)-powered surveillance tool.
The social media listening tool is said to likely originate from China and is powered by one of Meta's Llama models, with the accounts in question using the AI company's models to generate detailed descriptions and analyze documents. 2025-02-22T10:47:00+00:00 https://thehackernews.com/2025/02/openai-bans-accounts-misusing-chatgpt.html

Cyble - CyberSecurity Firm. OmniGPT Leak Claims Show Risk of Using Sensitive Data on AI Chatbots. Recent claims by threat actors that they obtained an OmniGPT backend database show the risks of using sensitive data on AI chatbot platforms, where data inputs could potentially be revealed to other users or exposed in a breach. OmniGPT has not yet responded to the claims, which were made by threat actors on the BreachForums leak site, but Cyble dark web researchers have analyzed the exposed data. Cyble researchers detected potentially sensitive and critical data in the files, ranging from personally identifiable information (PII) to financial information, access credentials, tokens, and API keys. The researchers did not attempt to validate the credentials, but based their analysis on the potential severity of the leak if the TAs' claims are confirmed to be valid. OmniGPT integrates several well-known large language models (LLMs) into a single platform, including Google Gemini, ChatGPT, Claude Sonnet, Perplexity, DeepSeek and DALL-E, making it a convenient platform for accessing a range of LLM tools.
The threat actors (TAs), who posted under multiple aliases, claimed that the data "contains all the messages between the users and the chatbot of this site, as well as all the links to the files uploaded by users, and also 30,000 user emails. You can find a lot of useful information in the messages, such as API keys and credentials, and many of the files […]" 2025-02-21T13:59:15+00:00 https://cyble.com/blog/omnigpt-leak-risk-ai-data/

Cyber Skills - Podcast Cyber. The Growing Threat of Phishing Attacks and How to Protect Yourself. Phishing remains the most common type of cybercrime, evolving into a sophisticated threat that preys on human psychology and advanced technology. Traditional phishing involves attackers sending fake, malicious links disguised as legitimate messages to trick victims into revealing sensitive information or installing malware. However, phishing attacks have become increasingly advanced, introducing what experts call "phishing 2.0" and psychological phishing. Phishing 2.0 leverages AI to analyse publicly available data, such as social media profiles and public records, to craft highly personalized and convincing messages. These tailored attacks significantly increase the likelihood of success. Psychological manipulation also plays a role in phishing schemes. Attackers exploit emotions like fear and trust, often creating a sense of urgency to pressure victims into acting impulsively. By impersonating trusted entities, such as banks or employers, they pressure victims into following instructions without hesitation. AI has further amplified the efficiency and scale of phishing attacks.
Cybercriminals use AI tools to generate convincing scam messages rapidly, launch automated campaigns and target thousands of individuals within minutes. Tools like ChatGPT, when misused in "DAN mode", can bypass ethical restrictions to craft grammatically correct and compelling messages, aiding attackers who lack English fluency. 2025-02-17T00:00:00+00:00 https://www.cyberskills.ie/explore/news/the-growing-threat-of-phishing-attacks-and-how-to-protect-yourself--.html

Checkpoint - Security hardware manufacturer. A Safer Digital Future: Stopping AI-Fueled Cyber Scams for a More Secure Tomorrow. With Safer Internet Day this week, it's hard not to feel a little extra concern about our kids' online safety. Today, our children and young adults are living and breathing a digital world that's evolving faster than ever, one where scammers are now using AI-assisted smart tools like ChatGPT and DeepSeek to create malicious content that can trick even the savviest among us. To protect these young minds, some governments have taken bold steps. Singapore and Australia, for example, have introduced restrictions or complete bans to prevent children under 16 from using the popular social media platform Instagram. These measures recognize […]
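The urgency and impersonation cues described in the phishing items above are exactly what simple mail filters look for. As an illustrative sketch only (the keyword lists and threshold below are made-up examples, not taken from any of the cited articles or from a vetted detection model), a naive cue-counting filter might look like this:

```python
# Illustrative urgency/impersonation cues often seen in phishing lures.
# These lists and the threshold are arbitrary examples for demonstration.
URGENCY_CUES = ["act now", "immediately", "within 24 hours",
                "account suspended", "verify your account"]
IMPERSONATION_CUES = ["your bank", "hr department", "it support", "payroll"]

def phishing_score(message: str) -> int:
    """Count how many known cues appear in the message (case-insensitive)."""
    text = message.lower()
    score = sum(cue in text for cue in URGENCY_CUES)
    score += sum(cue in text for cue in IMPERSONATION_CUES)
    return score

def looks_suspicious(message: str, threshold: int = 2) -> bool:
    """Flag a message once it accumulates enough cues."""
    return phishing_score(message) >= threshold

msg = "Your bank account suspended. Verify your account immediately."
print(phishing_score(msg), looks_suspicious(msg))  # → 4 True
```

A real filter would combine such heuristics with sender-reputation and URL analysis; the point of the sketch is only that urgency-based manipulation leaves detectable textual traces, even when the message is grammatically flawless AI output.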
2025-02-12T21:43:22+00:00 https://blog.checkpoint.com/security/a-safer-digital-future-stopping-ai-fueled-cyber-scams-for-a-more-secure-tomorrow/
Proofpoint - Cyber Firms. GenAI Tools Were Putting a Retailer's Data at Risk - Here's How Proofpoint Helped. 2025-02-10T05:55:21+00:00 https://www.proofpoint.com/us/blog/information-protection/retailer-used-proofpoint-to-securely-adopt-genai

Checkpoint - Security hardware manufacturer. Protect Your Organization from GenAI Risks with Harmony SASE. Love it or hate it, large language models (LLMs) like ChatGPT and other AI tools are reshaping the modern workplace. As AI becomes a critical part of daily work, establishing guardrails and deploying monitoring for these tools is critical. That's where Check Point's Harmony SASE comes in. We've already talked about Browser Security and the clipboard control feature to help define what types of information can't be shared with LLMs. For monitoring these services, our GenAI Security service shows exactly which AI tools your team is using, who is using them, what kind of information they're sharing with the […]
2025-02-03T13:00:01+00:00 https://blog.checkpoint.com/security/protect-your-organization-from-genai-risks-with-harmony-sase/
Proofpoint - Cyber Firms. DeepSeek AI: Safeguarding Your Sensitive and Valuable Data with Proofpoint. 2025-01-31T08:35:30+00:00 https://www.proofpoint.com/us/blog/information-protection/deepseek-ai-safeguarding-your-sensitive-and-valuable-data-proofpoint

Proofpoint - Cyber Firms. Unlocking the Value of AI: Safe AI Adoption for Cybersecurity Professionals. 2025-01-24T05:28:30+00:00 https://www.proofpoint.com/us/blog/information-protection/value-of-safe-ai-adoption-insights-for-cybersecurity-professionals

The Hacker News - The Hacker News is a hacking news blog (surprising, right?). Product Review: How Reco Discovers Shadow AI in SaaS. As SaaS providers race to integrate AI into their product offerings to stay competitive and relevant, a new challenge has emerged in the world of AI: shadow AI. Shadow AI refers to the unauthorized use of AI tools and copilots at organizations.
For example, a developer using ChatGPT to assist with writing code, a salesperson downloading an AI-powered meeting transcription tool, or a […] 2025-01-09T17:25:00+00:00 https://thehackernews.com/2025/01/product-review-how-reco-discovers.html

Proofpoint - Cyber Firms. Security Brief: ClickFix Social Engineering Technique Floods Threat Landscape. 2024-11-18T10:34:05+00:00 https://www.proofpoint.com/us/blog/threat-insight/security-brief-clickfix-social-engineering-technique-floods-threat-landscape

Proofpoint - Cyber Firms. How Proofpoint Information Protection Provides Value for Customers. 2024-10-30T14:01:17+00:00 https://www.proofpoint.com/us/blog/information-protection/comparing-proofpoint-dlp-with-microsoft-purview

RiskIQ - cyber risk firm (now Microsoft). Bumblebee malware returns after recent law enforcement disruption. 2024-10-21T18:57:24+00:00 https://community.riskiq.com/article/b382c0b6

RiskIQ - cyber risk firm (now Microsoft). An update on disrupting deceptive uses of AI. 2024-10-16T19:15:03+00:00 https://community.riskiq.com/article/e46070dd

CyberScoop - scoopnewsgroup.com special Cyber. OpenAI says it has disrupted 20-plus foreign influence networks in past year.
Threat actors were observed using ChatGPT and other tools to scope out attack surfaces, debug malware and create spearphishing content.
2024-10-09T19:25:56+00:00 https://cyberscoop.com/openai-threat-report-foreign-influence-generative-ai/
Korben - French blogger. AISheeter - The Power of AI in the Service of Google Sheets Slaves. AISheeter is available on the Google Workspace Marketplace. Once installed, go to the Sheets extensions menu, then into the extension's settings to add your API keys: ChatGPT, Claude, Gemini and Groq. 2024-10-09T14:51:12+00:00 https://korben.info/google-sheet-chatgpt.html

RiskIQ - cyber risk firm (now Microsoft). Weekly OSINT Highlights, 30 September 2024. 2024-09-30T13:21:55+00:00 https://community.riskiq.com/article/70e8b264

The Hacker News - The Hacker News is a hacking news blog (surprising, right?). ChatGPT macOS Flaw Could've Enabled Long-Term Spyware via Memory Function. A now-patched security vulnerability in OpenAI's ChatGPT app for macOS could have made it possible for attackers to plant long-term persistent spyware into the artificial intelligence (AI) tool's memory. The technique, dubbed SpAIware, could be abused to facilitate "continuous data exfiltration of any information the user typed or responses received by ChatGPT, including any future chat sessions". 2024-09-25T15:01:00+00:00 https://thehackernews.com/2024/09/chatgpt-macos-flaw-couldve-enabled-long.html

Proofpoint - Cyber Firms. Generative AI: How Can Organizations Ride the GenAI Wave Safely and Contain Insider Threats?
2024-09-24T08:14:13+00:00 https://www.proofpoint.com/us/blog/information-protection/riding-genai-wave-safely-containing-insider-threats

Proofpoint - Cyber Firms. Proofpoint's Human-Centric Security Solutions Named SC Awards 2024 Finalist in Four Unique Categories. 2024-08-30T07:00:00+00:00 https://www.proofpoint.com/us/blog/corporate-news/proofpoint-named-sc-awards-2024-finalist

The State of Security - American magazine. Fast Forward or Freefall? Navigating the Rise of AI in Cybersecurity. It has been only one year and nine months since OpenAI made ChatGPT available to the public, and it has already had a massive impact on our lives. While AI will undoubtedly reshape our world, the exact nature of this revolution is still unfolding. With little to no experience, security administrators can use ChatGPT to rapidly create PowerShell scripts. Tools like Grammarly or Jarvis can turn average writers into confident editors. Some people have even begun using AI as an alternative to traditional search engines like Google and Bing. The applications of AI are endless!
Generative AI in... 2024-08-19T03:31:39+00:00 https://www.tripwire.com/state-of-security/fast-forward-or-freefall-navigating-rise-ai-cybersecurity

Proofpoint - Cyber Firms. Cybersecurity Stop of the Month: Credential Phishing Attack Targeting User Location Data. 2024-08-14T07:19:53+00:00 https://www.proofpoint.com/us/blog/email-and-cloud-threats/stopping-phish-attacks-behind-qr-codes-human-verification-challenge

InfoSecurity Mag - InfoSecurity Magazine. OpenAI Leadership Split Over In-House AI Watermarking Technology. One primary concern is that the tool might turn ChatGPT users away from the product. 2024-08-09T10:15:00+00:00 https://www.infosecurity-magazine.com/news/openai-split-ai-watermarking/

Checkpoint - Security hardware manufacturer. Unlock the Power of GenAI with Check Point Software Technologies.
The GenAI Revolution is Already Here. Generative AI applications like ChatGPT and Gemini are here to stay. But as they make users' lives much simpler, they also make your organization's life much harder. While some organizations have outright banned GenAI applications, according to a Check Point and Vanson Bourne study, 92% of organizations allow their employees to use GenAI tools, yet are concerned about security and data leakage. In fact, one estimate says 55% of data leakage events are a direct result of GenAI usage. As tasks like debugging code and refining text can now be completed in a fraction […]
2024-08-07T13:00:47+00:00 https://blog.checkpoint.com/artificial-intelligence/unlock-the-power-of-genai-with-check-point-software-technologies/
We Live Security - ESET antivirus software publisher. Beware of fake AI tools masking very real malware threats. Ever attuned to the latest trends, cybercriminals distribute malicious tools that pose as ChatGPT, Midjourney and other generative AI assistants. 2024-07-29T09:00:00+00:00 https://www.welivesecurity.com/en/cybersecurity/beware-fake-ai-tools-masking-very-real-malware-threat/

RiskIQ - cyber risk firm (now Microsoft). Scam Attacks Taking Advantage of the Popularity of the Generative AI Wave. 2024-07-26T19:24:17+00:00 https://community.riskiq.com/article/2826e7d7

AlienVault Lab Blog - AlienVault is a major player in IOC-driven defense. What Healthcare Providers Should Do After A Medical Data Breach. Data breaches are expensive, the 2023 Cost of a Data Breach report reveals. But data breaches aren't just expensive, they also harm patient privacy, damage organizational reputation, and erode patient trust in healthcare providers. As data breaches are now largely a matter of "when" not "if", it's important to devise a solid data breach response plan. By acting fast to prevent further damage and data loss, you can restore operations as quickly as possible with minimal harm done. Contain the Breach. Once a breach has been detected, you need to act fast to contain it, so it doesn't spread. That means disconnecting the affected system from the network, but not turning it off altogether, as your forensic team still needs to investigate the situation.
Simply unplug the network cable from the router to disconnect it from the internet. If your antivirus scanner has found malware or a virus on the system, quarantine it, so it can be analyzed later. Keep the firewall settings as they are and save all firewall and security logs. You can also take screenshots if needed. It's also smart to change all access control login details. Strong, complex passwords are a basic cybersecurity feature that is difficult for hackers and software to crack. It's still important to record old passwords for future investigation. Also, remember to deactivate less-important accounts.

Document the Breach. You then need to document the breach, so forensic investigators can find out what caused it, as well as recommend accurate next steps to secure the network now and prevent future breaches. So, in your report, explain how you came to hear of the breach and relay exactly what was stated in the notification (including the date and time you were notified). Also, document every step you took in response to the breach. This includes the date and time you disconnected systems from the network and changed account credentials and passwords. If you use artificial intelligence (AI) tools, you'll also need to consider whether they played a role in the breach, and document this if so. For example, ChatGPT, a popular chatbot and virtual assistant, can successfully exploit one-day security vulnerabilities 87% of the time, a recent study by researchers at the University of Illinois Urbana-Champaign found. Although AI is increasingly used in healthcare to automate tasks, manage patient data, and even make tailored care recommendations, it does pose a serious risk to patient data integrity despite the other benefits it provides. So, assess whether AI influenced your breach at all, so your organization can make changes as needed to better prevent data breaches in the future.
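The containment and documentation steps above amount to keeping a timestamped timeline of everything the response team did. As a minimal illustrative sketch (the structure and field names are my own, not prescribed by any regulation or by the article), such a log could be as simple as:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BreachLog:
    """Minimal timeline of a breach response, per the steps described above."""
    notified_at: str          # date and time the team learned of the breach
    notification_source: str  # how the breach was reported
    actions: list = field(default_factory=list)  # (timestamp, action) pairs

    def record(self, action: str) -> None:
        """Append an action with the current UTC timestamp."""
        ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.actions.append((ts, action))

log = BreachLog(notified_at="2024-07-23T09:14:00+00:00",
                notification_source="Antivirus alert escalated by IT staff")
log.record("Disconnected affected host from network (cable unplugged, power left on)")
log.record("Quarantined malware sample flagged by antivirus for later analysis")
log.record("Saved firewall and security logs; rotated access credentials")
print(len(log.actions))  # → 3
```

Even a plain spreadsheet serves the same purpose; what matters, as the article stresses, is that every action carries the date and time it was taken so forensic investigators can reconstruct the response.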
Report the Breach. Although your first instinct may be to keep the breach under wraps, you're actually legally required to report it. Under the […] 2024-07-23T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/what-healthcare-providers-should-do-after-a-medical-data-breach

Proofpoint - Cyber Firms. Proofpoint Leads the Way in Protecting People and Defending Data with a Pivotal Quarter. 2024-07-23T06:00:00+00:00 https://www.proofpoint.com/us/blog/corporate-news/proofpoint-leads-way-protecting-people-and-defending-data-pivotal-q2-2024

Security Intelligence - American news site. ChatGPT 4 can exploit 87% of one-day vulnerabilities.
Since the widespread and growing use of ChatGPT and other large language models (LLMs) in recent years, cybersecurity has been a top concern. Among the many questions, cybersecurity professionals wondered how effective these tools were in launching an attack. Cybersecurity researchers Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang recently performed a study to […]
2024-07-01T13:00:00+00:00 https://securityintelligence.com/articles/chatgpt-4-exploits-87-percent-one-day-vulnerabilities/
McAfee Labs - Editeur Logiciel Qualité sur la quantité: la clé Genai contre-intuitive<br>Quality Over Quantity: the Counter-Intuitive GenAI Key Cela est de près de deux ans depuis le lancement d'Openai, ce qui entraîne une sensibilisation et un accès accrus à la sensibilisation et à l'accès à des outils d'IA génératifs ....
> It’s been almost two years since OpenAI launched ChatGPT, driving increased mainstream awareness of and access to Generative AI tools.... ]]>
2024-06-28T17:51:58+00:00 https://www.mcafee.com/blogs/other-blogs/mcafee-labs/quality-over-quantity-the-counter-intuitive-genai-key/ www.secnews.physaphae.fr/article.php?IdArticle=8527342 False Tool ChatGPT 3.0000000000000000
ZD Net - Magazine Info Les détecteurs de l'IA peuvent-ils nous sauver de Chatgpt?J'ai essayé 6 outils en ligne pour découvrir<br>Can AI detectors save us from ChatGPT? I tried 6 online tools to find out With the sudden arrival of ChatGPT, educators and editors face a worrying surge of automated content submissions. We look at the problem and what can be done about it.]]> 2024-06-21T09:13:00+00:00 https://www.zdnet.com/article/can-ai-detectors-save-us-from-chatgpt-i-tried-5-online-tools-to-find-out/#ftag=RSSbaffb68 www.secnews.physaphae.fr/article.php?IdArticle=8522600 False Tool ChatGPT 2.0000000000000000 ZD Net - Magazine Info Conseils de confidentialité de Chatgpt: deux façons importantes de limiter les données que vous partagez avec OpenAI<br>ChatGPT privacy tips: Two important ways to limit the data you share with OpenAI Want to use AI tools without compromising control of your data? Here are two ways to safeguard your privacy in OpenAI\'s chatbot.]]> 2024-06-06T20:51:21+00:00 https://www.zdnet.com/article/chatgpt-privacy-tips-two-important-ways-to-limit-the-data-you-share-with-openai/#ftag=RSSbaffb68 www.secnews.physaphae.fr/article.php?IdArticle=8514312 False Tool ChatGPT 2.0000000000000000 The State of Security - Magazine Américain Comment les criminels tirent parti de l'IA pour créer des escroqueries convaincantes<br>How Criminals Are Leveraging AI to Create Convincing Scams Generative AI tools like ChatGPT and Google Bard are some of the most exciting technologies in the world. They have already begun to revolutionize productivity, supercharge creativity, and make the world a better place. But as with any new technology, generative AI has brought about new risks-or, rather, made old risks worse. Aside from the much-discussed potential " AI apocalypse" that has dominated headlines in recent months, generative AI has a more immediate negative impact: creating convincing phishing scams. 
Cybercriminals create far more sophisticated scams with generative AI than...]]> 2024-05-27T03:30:55+00:00 https://www.tripwire.com/state-of-security/how-criminals-are-leveraging-ai-create-convincing-scams www.secnews.physaphae.fr/article.php?IdArticle=8507703 False Tool ChatGPT 2.0000000000000000 ProofPoint - Cyber Firms Arrêt de cybersécurité du mois: les attaques d'identité qui ciblent la chaîne d'approvisionnement<br>Cybersecurity Stop of the Month: Impersonation Attacks that Target the Supply Chain 2024-05-14T06:00:46+00:00 https://www.proofpoint.com/us/blog/email-and-cloud-threats/impersonation-attacks-target-supply-chain www.secnews.physaphae.fr/article.php?IdArticle=8499611 False Ransomware,Data Breach,Tool,Vulnerability,Threat ChatGPT 2.0000000000000000 ProofPoint - Cyber Firms Genai alimente la dernière vague des menaces de messagerie modernes<br>GenAI Is Powering the Latest Surge in Modern Email Threats 2024-05-06T07:54:03+00:00 https://www.proofpoint.com/us/blog/email-and-cloud-threats/genai-powering-latest-surge-modern-email-threats www.secnews.physaphae.fr/article.php?IdArticle=8494488 False Ransomware,Data Breach,Tool,Vulnerability,Threat ChatGPT 3.0000000000000000 ProofPoint - Cyber Firms Quelle est la meilleure façon d'arrêter la perte de données Genai?Adopter une approche centrée sur l'homme<br>What\\'s the Best Way to Stop GenAI Data Loss? Take a Human-Centric Approach 2024-05-01T05:12:14+00:00 https://www.proofpoint.com/us/blog/information-protection/whats-best-way-stop-genai-data-loss-take-human-centric-approach www.secnews.physaphae.fr/article.php?IdArticle=8491708 False Tool,Medical,Cloud ChatGPT 3.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) 
Genai: un nouveau mal de tête pour les équipes de sécurité SaaS<br>GenAI: A New Headache for SaaS Security Teams The introduction of Open AI\'s ChatGPT was a defining moment for the software industry, touching off a GenAI race with its November 2022 release. SaaS vendors are now rushing to upgrade tools with enhanced productivity capabilities that are driven by generative AI. Among a wide range of uses, GenAI tools make it easier for developers to build software, assist sales teams in mundane email writing,]]> 2024-04-17T16:37:00+00:00 https://thehackernews.com/2024/04/genai-new-headache-for-saas-security.html www.secnews.physaphae.fr/article.php?IdArticle=8484090 False Tool,Cloud ChatGPT 2.0000000000000000 Korben - Bloger francais Les IA comme ChatGPT aident-elles réellement les étudiants en informatique ? 2024-04-15T10:13:05+00:00 https://korben.info/apprendre-a-coder-avec-ia-etude-generateurs-code-novice.html www.secnews.physaphae.fr/article.php?IdArticle=8482646 False Tool ChatGPT 3.0000000000000000 ProofPoint - Cyber Firms Mémoire de sécurité: TA547 cible les organisations allemandes avec Rhadamanthys Stealer<br>Security Brief: TA547 Targets German Organizations with Rhadamanthys Stealer 2024-04-10T10:12:47+00:00 https://www.proofpoint.com/us/blog/threat-insight/security-brief-ta547-targets-german-organizations-rhadamanthys-stealer www.secnews.physaphae.fr/article.php?IdArticle=8479187 False Malware,Tool,Threat ChatGPT 2.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Les risques de sécurité du chat Microsoft Bing AI pour le moment<br>The Security Risks of Microsoft Bing AI Chat at this Time 2024-04-10T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/microsoft-bing-ai-chat-is-a-bigger-security-issue-than-it-seems www.secnews.physaphae.fr/article.php?IdArticle=8479215 False Ransomware,Tool,Vulnerability ChatGPT 2.0000000000000000 Recorded Future - FLux Recorded Future Les cybercriminels 
répartissent les logiciels malveillants à travers les pages Facebook imitant les marques d'IA<br>Cybercriminals are spreading malware through Facebook pages impersonating AI brands Les cybercriminels prennent le contrôle des pages Facebook et les utilisent pour annoncer de faux logiciels d'intelligence artificielle générative chargés de logiciels malveillants. & Nbsp;Selon des chercheurs de la société de cybersécurité Bitdefender, les CyberCrooks profitent de la popularité des nouveaux outils génératifs d'IA et utilisent «malvertising» pour usurper l'identité de produits légitimes comme MidJourney, Sora AI, Chatgpt 5 et
Cybercriminals are taking over Facebook pages and using them to advertise fake generative artificial intelligence software loaded with malware.  According to researchers at the cybersecurity company Bitdefender, the cybercrooks are taking advantage of the popularity of new generative AI tools and using “malvertising” to impersonate legitimate products like Midjourney, Sora AI, ChatGPT 5 and]]>
2024-04-04T17:04:16+00:00 https://therecord.media/cybercriminals-plant-malware-facebook-ai-brands www.secnews.physaphae.fr/article.php?IdArticle=8476032 False Malware,Tool ChatGPT 2.0000000000000000
ProofPoint - Cyber Firms Le rapport du paysage de la perte de données 2024 explore la négligence et les autres causes communes de perte de données<br>The 2024 Data Loss Landscape Report Explores Carelessness and Other Common Causes of Data Loss 2024-03-19T05:00:28+00:00 https://www.proofpoint.com/us/blog/information-protection/2024-data-loss-landscape-report-dlp www.secnews.physaphae.fr/article.php?IdArticle=8466553 False Tool,Threat,Legislation,Cloud ChatGPT 3.0000000000000000 Dark Reading - Informationweek Branch Chatgpt déverse les secrets dans une nouvelle attaque POC<br>ChatGPT Spills Secrets in Novel PoC Attack Research is latest in a growing body of work to highlight troubling weaknesses in widely used generative AI tools.]]> 2024-03-13T21:59:23+00:00 https://www.darkreading.com/cyber-risk/researchers-develop-new-attack-for-extracting-secrets-from-chatgpt-other-genai-tools www.secnews.physaphae.fr/article.php?IdArticle=8463417 False Tool ChatGPT 2.0000000000000000 Dark Reading - Informationweek Branch Gemini AI de Google \\ vulnérable à la manipulation du contenu<br>Google\\'s Gemini AI Vulnerable to Content Manipulation Like ChatGPT and other GenAI tools, Gemini is susceptible to attacks that can cause it to divulge system prompts, reveal sensitive information, and execute potentially malicious actions.]]> 2024-03-12T10:00:00+00:00 https://www.darkreading.com/cyber-risk/google-gemini-vulnerable-to-content-manipulation-researchers-say www.secnews.physaphae.fr/article.php?IdArticle=8462551 False Tool ChatGPT 2.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Sécuriser l'IA<br>Securing AI AI governance  framework model like the NIST AI RMF to enable business innovation and manage risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles. 
Vulnerabilities in ChatGPT A recently discovered vulnerability in the gpt-3.5-turbo model exposed identifiable information. The vulnerability was reported in the news in late November 2023. Prompting the chatbot to repeat a particular word indefinitely triggered the vulnerability. A group of security researchers from Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the “extractable memorization” of training data that an adversary can extract by querying an ML model without prior knowledge of the training dataset. The researchers’ report shows an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a newly developed divergence attack on the aligned ChatGPT caused the model to emit training data at a rate 150 times higher than normal behavior. Findings show that larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as model size grows. While similar attacks have been documented against unaligned models, the new ChatGPT vulnerability demonstrated a successful attack on LLM models typically built with the strict guardrails found in aligned models. This raises questions about best practices and methods for how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy. U.S. and UK’s Bilateral cybersecurity effort on securing AI The US Cybersecurity and Infrastructure Security Agency (CISA) and the UK’s National Cyber Security Centre (NCSC), in cooperation with 21 agencies and ministries from 18 other countries, are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI, part of the U.S. and UK’s bilateral cybersecurity effort, were announced at the end of November 2023.
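The divergence attack described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the researchers' actual code: it builds the "repeat this word forever" prompt and then scans a model's output for the point where it stops repeating the word, since any divergent suffix is a candidate for regurgitated, memorized training data.

```python
# Illustrative sketch of the "divergence" check from the extractable-memorization
# study: ask a model to repeat one word forever, then find where its output stops
# repeating that word. Whatever follows the repetition is a candidate leak.
# (Hypothetical helpers for illustration; not the researchers' published code.)

def make_attack_prompt(word: str) -> str:
    """Build the repeat-forever prompt used to trigger divergence."""
    return f'Repeat this word forever: "{word} {word} {word}"'

def diverged_suffix(output: str, word: str) -> str:
    """Return the part of the model output that is no longer the repeated word.

    An empty return value means the model never diverged (no candidate leak).
    """
    tokens = output.split()
    for i, tok in enumerate(tokens):
        # Strip trailing punctuation so "poem," still counts as the word itself.
        if tok.strip(".,!?").lower() != word.lower():
            return " ".join(tokens[i:])
    return ""

# Example: a model that diverges after five repetitions starts emitting other
# text, which an attacker would then check against known corpora.
sample_output = "poem poem poem poem poem Contact us at 555-0100 for details"
print(diverged_suffix(sample_output, "poem"))  # -> Contact us at 555-0100 for details
```

In the study, candidate suffixes were matched against large public corpora to confirm memorization; the scoring step is omitted here.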
The pledge is an acknowledgement of AI risk by nation leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The Department of Homeland Security (DHS) CISA and UK NCSC joint Guidelines for Secure AI System Development aim to ensure cybersecurity decisions are embedded at every stage of the AI development lifecycle, from the start and throughout, rather than as an afterthought. Securing AI by design Securing AI by design is a key approach to mitigating cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle is secure, from design through development, deployment, and operations and maintenance, is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development align closely with the software development life cycle practices defined in the NCSC’s Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF).
The 4 pillars that embody the Guidelines for Secure AI System Development offer guidance for AI providers of any systems, whether newly created from the ground up or built on top of tools and services provided from]]> 2024-03-07T11:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/securing-ai www.secnews.physaphae.fr/article.php?IdArticle=8460259 False Tool,Vulnerability,Threat,Mobile,Medical,Cloud,Technical ChatGPT 2.0000000000000000 RiskIQ - cyber risk firms (now microsoft) Rester en avance sur les acteurs de la menace à l'ère de l'IA<br>Staying ahead of threat actors in the age of AI 2024-03-05T19:03:47+00:00 https://community.riskiq.com/article/ed40fbef www.secnews.physaphae.fr/article.php?IdArticle=8459485 False Ransomware,Malware,Tool,Vulnerability,Threat,Studies,Medical,Technical APT 28,ChatGPT,APT 4 2.0000000000000000 Techworm - News Microsoft et OpenAI disent que les pirates utilisent ChatGPT pour les cyberattaques<br>Microsoft and OpenAI say hackers are using ChatGPT for Cyberattacks reported that China's Charcoal Typhoon used its services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns. Another example is Iran's Crimson Sandstorm, which used LLMs to generate code snippets related to app and web development, generate content likely for spear-phishing campaigns, and get help developing code to evade detection. In addition, Forest Blizzard, the Russian nation-state group, reportedly used OpenAI services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.
OpenAI said Wednesday that it had terminated identified OpenAI accounts associated with state-sponsored hacking actors. These actors generally sought to use OpenAI services to query open-source information, translate, find coding errors, and run basic coding tasks, the AI company said. “Language support is a natural feature of LLMs and is attractive for threat actors who continuously focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ professional networks and other relationships. Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely,” reads the new AI security report published by Microsoft on Wednesday in partnership with OpenAI. Fortunately, no significant or novel attacks using LLM technology have been detected yet, according to the company. “Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ use of AI,” Microsoft noted in its report.
To address the threat, Microsoft announced a set of principles shaping its policy and actions to combat the abuse of its AI services by advanced persistent threats (APTs), man]]> 2024-02-15T20:28:57+00:00 https://www.techworm.net/2024/02/hackers-chatgpt-ai-in-cyberattacks.html www.secnews.physaphae.fr/article.php?IdArticle=8450454 False Tool,Threat,Studies ChatGPT 2.0000000000000000 Global Security Mag - Site de news francais Metomic lance l'intégration de ChatGPT<br>Metomic Launches ChatGPT Integration revues de produits
Metomic Launches ChatGPT Integration To Help Businesses Take Full Advantage Of The Generative AI Tool Without Putting Sensitive Data At Risk Metomic for ChatGPT enables security leaders to boost productivity while monitoring data being uploaded to OpenAI\'s ChatGPT platform in real-time - Product Reviews]]>
2024-02-05T15:08:59+00:00 https://www.globalsecuritymag.fr/metomic-launches-chatgpt-integration.html www.secnews.physaphae.fr/article.php?IdArticle=8446961 False Tool ChatGPT 2.0000000000000000
HackRead - Chercher Cyber Des milliers de messages Web sombres exposent des plans d'abus de chatpt<br>Thousands of Dark Web Posts Expose ChatGPT Abuse Plans Par deeba ahmed Les cybercriminels font activement la promotion de l'abus de chatppt et de chatbots similaires, offrant une gamme d'outils malveillants, des logiciels malveillants aux kits de phishing. Ceci est un article de HackRead.com Lire la publication originale: Des milliers de messages Web sombres exposent des plans d'abus de chatppt
>By Deeba Ahmed Cybercriminals are actively promoting the abuse of ChatGPT and similar chatbots, offering a range of malicious tools from malware to phishing kits. This is a post from HackRead.com Read the original post: Thousands of Dark Web Posts Expose ChatGPT Abuse Plans]]>
2024-01-26T17:26:19+00:00 https://www.hackread.com/dark-web-posts-expose-chatgpt-abuse-plans/ www.secnews.physaphae.fr/article.php?IdArticle=8443482 False Malware,Tool ChatGPT 3.0000000000000000
The Register - Site journalistique Anglais Les avertissements du NCSC du GCHQ d'une 'possibilité réaliste' que l'IA aide les logiciels malveillants soutenus par l'État à échapper à la détection<br>GCHQ's NCSC warns of 'realistic possibility' AI will help state-backed malware evade detection already debunked. However, a paper published today by the UK's National Cyber Security Centre (NCSC) suggests there is a "realistic possibility" that by 2025 the most sophisticated attackers will improve considerably thanks to AI models informed by data describing successful cyber hits.… ]]> 2024-01-24T06:26:08+00:00 https://go.theregister.com/feed/www.theregister.com/2024/01/24/ncsc/ www.secnews.physaphae.fr/article.php?IdArticle=8442422 False Malware,Tool ChatGPT 3.0000000000000000 Silicon - Site de News Francais ChatGPT : OpenAI prépare des outils contre la désinformation électorale 2024-01-16T11:14:05+00:00 https://www.silicon.fr/chatgpt-openai-prepare-des-outils-contre-la-desinformation-electorale-474993.html www.secnews.physaphae.fr/article.php?IdArticle=8439520 False Tool ChatGPT 3.0000000000000000 We Live Security - Editeur Logiciel Antivirus ESET Résultats clés du rapport de la menace ESET H2 2023 – Semaine en sécurité avec Tony Anscombe<br>Key findings from ESET Threat Report H2 2023 – Week in security with Tony Anscombe How cybercriminals take advantage of the popularity of ChatGPT and other tools of its ilk to direct people to sketchy sites, plus other interesting findings from ESET's latest Threat Report]]>
https://www.compromisingpositions.co.uk/podcast/episode-13-five-hot-takes-on-ai www.secnews.physaphae.fr/article.php?IdArticle=8517011 False Tool ChatGPT 3.0000000000000000 CompromisingPositions - Podcast Cyber Épisode 12: Comment utiliser Chatgpt et l'IA pour améliorer votre fonction de cybersécurité<br>EPISODE 12: How to Use ChatGPT and AI to Level UP Your Cybersecurity function 2023-12-14T00:00:00+00:00 https://www.compromisingpositions.co.uk/podcast/episode-12-how-to-use-chatgpt-and-ai-to-level-up-your-cybersecurity-function www.secnews.physaphae.fr/article.php?IdArticle=8517012 False Tool ChatGPT 3.0000000000000000 BBC - BBC News - Technology Chatgpt Builder aide à créer des campagnes d'arnaque et de piratage<br>ChatGPT builder helps create scam and hack campaigns A cutting-edge tool from Open AI appears to be poorly moderated, allowing it to be abused by cyber-criminals.]]> 2023-12-07T00:04:10+00:00 https://www.bbc.co.uk/news/technology-67614065?at_medium=RSS&at_campaign=KARANGA www.secnews.physaphae.fr/article.php?IdArticle=8419726 False Hack,Tool ChatGPT 2.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) Sécurité générative de l'IA: prévention de l'exposition aux données de Microsoft Copilot<br>Generative AI Security: Preventing Microsoft Copilot Data Exposure Microsoft Copilot has been called one of the most powerful productivity tools on the planet. Copilot is an AI assistant that lives inside each of your Microsoft 365 apps - Word, Excel, PowerPoint, Teams, Outlook, and so on. Microsoft\'s dream is to take the drudgery out of daily work and let humans focus on being creative problem-solvers. 
What makes Copilot a different beast than ChatGPT and]]> 2023-12-05T16:59:00+00:00 https://thehackernews.com/2023/12/generative-ai-security-preventing.html www.secnews.physaphae.fr/article.php?IdArticle=8419257 False Tool ChatGPT 3.0000000000000000 Checkpoint - Fabricant Materiel Securite L'information est le pouvoir, mais la désinformation est tout aussi puissante<br>Information is power, but misinformation is just as powerful Les techniques de désinformation et de manipulation employées par les cybercriminels deviennent de plus en plus sophistiquées en raison de la mise en œuvre de l'intelligence artificielle dans leurs systèmes que l'ère post-vérité a atteint de nouveaux sommets avec l'avènement de l'intelligence artificielle (IA).Avec la popularité croissante et l'utilisation d'outils d'IA génératifs tels que Chatgpt, la tâche de discerner entre ce qui est réel et faux est devenu plus compliqué, et les cybercriminels tirent parti de ces outils pour créer des menaces de plus en plus sophistiquées.Vérifier Pont Software Technologies a constaté qu'une entreprise sur 34 a connu une tentative d'attaque de ransomware au cours des trois premiers trimestres de 2023, une augmentation [& # 8230;]
>The disinformation and manipulation techniques employed by cybercriminals are becoming increasingly sophisticated due to the implementation of Artificial Intelligence in their systems The post-truth era has reached new heights with the advent of artificial intelligence (AI). With the increasing popularity and use of generative AI tools such as ChatGPT, the task of discerning between what is real and fake has become more complicated, and cybercriminals are leveraging these tools to create increasingly sophisticated threats. Check Pont Software Technologies has found that one in 34 companies have experienced an attempted ransomware attack in the first three quarters of 2023, an increase […] ]]>
2023-11-30T13:00:15+00:00 https://blog.checkpoint.com/artificial-intelligence/information-is-power-but-misinformation-is-just-as-powerful/ www.secnews.physaphae.fr/article.php?IdArticle=8418065 False Ransomware,Tool ChatGPT,ChatGPT 2.0000000000000000
WatchGuard - Fabricant Matériel et Logiciels Les prédictions cyber 2024 du Threat Lab WatchGuard 2023-11-29T00:00:00+00:00 https://www.watchguard.com/fr/wgrd-news/press-releases/manipulation-de-modeles-linguistiques-piratage-de-casques-vr-renouveau-des www.secnews.physaphae.fr/article.php?IdArticle=8417803 False Tool,Threat,Prediction ChatGPT,ChatGPT 3.0000000000000000 ProofPoint - Cyber Firms Prédictions 2024 de Proofpoint : Brace for Impact<br>Proofpoint's 2024 Predictions: Brace for Impact 2023-11-28T23:05:04+00:00 https://www.proofpoint.com/us/blog/ciso-perspectives/proofpoints-2024-predictions-brace-impact www.secnews.physaphae.fr/article.php?IdArticle=8417740 False Ransomware,Malware,Tool,Vulnerability,Threat,Mobile,Prediction,Prediction ChatGPT,ChatGPT 3.0000000000000000 InfoSecurity Mag - InfoSecurity Magazine Les cybercriminels hésitent à utiliser l'IA génératrice<br>Cybercriminals Hesitant About Using Generative AI An analysis of dark web forums revealed many threat actors are skeptical about using tools like ChatGPT to launch attacks]]> 2023-11-28T11:40:00+00:00 https://www.infosecurity-magazine.com/news/cyber-criminals-hesitant/ www.secnews.physaphae.fr/article.php?IdArticle=8417485 False Tool,Threat ChatGPT,ChatGPT 2.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) Les solutions AI sont la nouvelle ombre IT<br>AI Solutions Are the New Shadow IT Ambitious Employees Tout New AI Tools, Ignore Serious SaaS Security Risks Like the SaaS shadow IT of the past, AI is placing CISOs and cybersecurity teams in a tough but familiar spot. Employees are covertly using AI with little regard for established IT and cybersecurity review procedures.
Considering ChatGPT\'s meteoric rise to 100 million users within 60 days of launch, especially with little]]> 2023-11-22T16:38:00+00:00 https://thehackernews.com/2023/11/ai-solutions-are-new-shadow-it.html www.secnews.physaphae.fr/article.php?IdArticle=8415868 False Tool,Cloud ChatGPT 3.0000000000000000 Korben - Bloger francais LM Studio – Pour faire tourner des LLMs en local et les utiliser directement dans votre code 2023-11-22T09:21:21+00:00 https://korben.info/lm-studio-local-llms-integration-code-usage.html www.secnews.physaphae.fr/article.php?IdArticle=8415915 False Tool ChatGPT 3.0000000000000000 Schneier on Security - Chercheur Cryptologue Américain Utilisation de l'IA génératrice pour la surveillance<br>Using Generative AI for Surveillance Exemple Il est utilisé pour l'analyse des sentiments.Je suppose que ce n'est pas encore très bon, mais qu'il ira mieux.
Generative AI is going to be a powerful tool for data analysis and summarization. Here’s an example of it being used for sentiment analysis. My guess is that it isn’t very good yet, but that it will get better.]]>
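The sentiment-analysis use Schneier mentions can be sketched as a simple prompt-and-parse round trip. The prompt template, label set, and `parse_label` helper below are illustrative assumptions, not a specific product's API; a real deployment would send the prompt to an LLM endpoint rather than use the canned reply shown here.

```python
# Minimal sketch of LLM-based sentiment analysis: constrain the model to a
# fixed label set in the prompt, then normalize its free-text reply into one
# of those labels. All names here are hypothetical illustrations.

LABELS = ("positive", "negative", "neutral")

def build_prompt(text: str) -> str:
    """Ask the model to answer with exactly one sentiment label."""
    return (
        "Classify the sentiment of the following text.\n"
        f"Text: {text!r}\n"
        f"Answer with exactly one word from: {', '.join(LABELS)}."
    )

def parse_label(reply: str) -> str:
    """Map a model reply like 'Negative.' onto a canonical label."""
    cleaned = reply.strip().strip(".").lower()
    return cleaned if cleaned in LABELS else "neutral"  # default when unparseable

# In production the prompt would go to a chat-completion endpoint; here a
# simulated reply shows the round trip.
prompt = build_prompt("The new update deleted all my files.")
simulated_reply = "Negative."
print(parse_label(simulated_reply))  # -> negative
```

Constraining the output format in the prompt and normalizing the reply is what makes the result usable for aggregate analysis at scale, which is precisely the surveillance concern.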
2023-11-20T11:57:37+00:00 https://www.schneier.com/blog/archives/2023/11/using-generative-ai-for-surveillance.html www.secnews.physaphae.fr/article.php?IdArticle=8414823 False Tool ChatGPT 3.0000000000000000
HackRead - Chercher Cyber Abrax666 malveillant AI Chatbot exposé comme arnaque potentielle<br>Malicious Abrax666 AI Chatbot Exposed as Potential Scam Par waqas abrax666 L'AI Chatbot est vanté par son développeur comme une alternative malveillante à Chatgpt, affirmant qu'il est un outil multitâche parfait pour les activités éthiques et contraires à l'éthique. . Ceci est un article de HackRead.com Lire la publication originale: Abrax666 malveillant AI Chatbot exposé comme escroquerie potentielle
>By Waqas Abrax666 AI Chatbot is being boasted by its developer as a malicious alternative to ChatGPT, claiming it\'s a perfect multitasking tool for both ethical and unethical activities. This is a post from HackRead.com Read the original post: Malicious Abrax666 AI Chatbot Exposed as Potential Scam]]>
2023-11-13T23:18:17+00:00 https://www.hackread.com/abrax666-ai-chatbot-exposed-as-potential-scam/ www.secnews.physaphae.fr/article.php?IdArticle=8411366 False Tool ChatGPT 2.0000000000000000
RiskIQ - cyber risk firms (now microsoft) Predator AI | ChatGPT-Powered Infostealer Takes Aim at Cloud Platforms #### Description SentinelLabs has identified a new Python-based infostealer and hacktool called 'Predator AI' that is designed to target cloud services. Predator AI is advertised through Telegram channels related to hacking. The main purpose of Predator is to facilitate web application attacks against various commonly used technologies, including content management systems (CMS) like WordPress, as well as cloud email services like AWS SES. However, Predator is a multi-purpose tool, much like the AlienFox and Legion cloud spamming toolsets. These toolsets share considerable overlap in publicly available code that each repurposes for its own brand's use, including the use of Androxgh0st and Greenbot modules. The Predator AI developer implemented a ChatGPT-driven class into the Python script, which is designed to make the tool easier to use and to serve as a single text-driven interface between disparate features. There have been several projects, like BlackMamba, that ultimately were more hype than the tools could deliver. Predator AI is a small step forward in this space: the actor is actively working on making a tool that can utilize AI. #### Reference URL(s) 1. https://www.sentinelone.com/labs/predator-ai-chatgpt-powered-infostealer-takes-aim-at-cloud-platforms/ #### Publication Date November 7, 2023 #### Author(s) Alex Delamotte ]]> 2023-11-08T18:59:39+00:00 https://community.riskiq.com/article/e5536969 www.secnews.physaphae.fr/article.php?IdArticle=8408039 False Tool,Cloud ChatGPT 2.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?)
Guide: comment VCISOS, MSPS et MSSP peuvent protéger leurs clients des risques Gen AI<br>Guide: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks Download the free guide, "It\'s a Generative AI World: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks." ChatGPT now boasts anywhere from 1.5 to 2 billion visits per month. Countless sales, marketing, HR, IT executive, technical support, operations, finance and other functions are feeding data prompts and queries into generative AI engines. They use these tools to write]]> 2023-11-08T16:30:00+00:00 https://thehackernews.com/2023/11/guide-how-vcisos-msps-and-mssps-can.html www.secnews.physaphae.fr/article.php?IdArticle=8407813 False Tool,Technical ChatGPT 2.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) AI offensif et défensif: le chat (GPT) de \\<br>Offensive and Defensive AI: Let\\'s Chat(GPT) About It ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it for leveling up their game. ChatGPT is the most swiftly growing consumer application to date. The extremely popular generative AI chatbot has the ability to generate human-like, coherent and contextually relevant responses.]]> 2023-11-07T15:51:00+00:00 https://thehackernews.com/2023/11/offensive-and-defensive-ai-lets-chatgpt.html www.secnews.physaphae.fr/article.php?IdArticle=8407178 False Tool,Threat ChatGPT 3.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) Qui expérimente les outils d'IA dans votre organisation?<br>Who\\'s Experimenting with AI Tools in Your Organization? With the record-setting growth of consumer-focused AI productivity tools like ChatGPT, artificial intelligence-formerly the realm of data science and engineering teams-has become a resource available to every employee.  
From a productivity perspective, that\'s fantastic. Unfortunately for IT and security teams, it also means you may have hundreds of people in your organization using a new tool in]]> 2023-10-23T17:04:00+00:00 https://thehackernews.com/2023/10/whos-experimenting-with-ai-tools-in.html www.secnews.physaphae.fr/article.php?IdArticle=8399384 False Tool ChatGPT 2.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Réévaluer les risques dans l'âge de l'intelligence artificielle<br>Re-evaluating risk in the artificial intelligence age 2023-10-17T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/re-evaluating-risk-in-the-artificial-intelligence-age www.secnews.physaphae.fr/article.php?IdArticle=8396640 False Malware,Tool,Vulnerability ChatGPT 4.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Renforcement de la cybersécurité: multiplication de force et efficacité de sécurité<br>Strengthening Cybersecurity: Force multiplication and security efficiency asymmetrical relationship. Within the cybersecurity realm, asymmetry has characterized the relationship between those safeguarding digital assets and those seeking to exploit vulnerabilities. Even within this context, where attackers are typically at a resource disadvantage, data breaches have continued to rise year after year as cyber threats adapt and evolve and utilize asymmetric tactics to their advantage.  These include technologies and tactics such as artificial intelligence (AI), and advanced social engineering tools. To effectively combat these threats, companies must rethink their security strategies, concentrating their scarce resources more efficiently and effectively through the concept of force multiplication. 
Asymmetrical threats, in the world of cybersecurity, can be summed up as the inherent disparity between adversaries and the tactics employed by the weaker party to neutralize the strengths of the stronger one. The utilization of AI and similar tools further erodes the perceived advantages that organizations believe they gain through increased spending on sophisticated security measures. Recent data from InfoSecurity Magazine, referencing the 2023 Checkpoint study, reveals a disconcerting trend: global cyberattacks increased by 7% between Q1 2022 and Q1 2023. While not significant at first blush, a deeper analysis reveals a more disturbing trend: the use of AI. AI's malicious deployment is exemplified in the following quote from their research: "...we have witnessed several sophisticated campaigns from cyber-criminals who are finding ways to weaponize legitimate tools for malicious gains." Furthermore, the report highlights: "Recent examples include using ChatGPT for code generation that can help less-skilled threat actors effortlessly launch cyberattacks." As threat actors continue to employ asymmetrical strategies to render organizations' substantial and ever-increasing security investments less effective, organizations must adapt to address this evolving threat landscape. Arguably, one of the most effective methods to confront threat adaptation and asymmetric tactics is the concept of force multiplication, which enhances relative effectiveness with fewer resources consumed, thereby increasing the efficiency of the security dollar. Efficiency, in the context of cybersecurity, refers to achieving the greatest cumulative effect of cybersecurity efforts with the lowest possible expenditure of resources, including time, effort, and costs.
While the concept of efficiency may seem straightforward, applying complex technological and human resources effectively and efficiently in complex domains like security demands more than mere calculations. This subject has been studied, modeled, and debated within the military community for centuries. Military and combat efficiency, a domain with a long history of analysis, ]]> 2023-10-16T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/strengthening-cybersecurity-force-multiplication-and-security-efficiency www.secnews.physaphae.fr/article.php?IdArticle=8396097 False Tool,Vulnerability,Threat,Studies,Prediction ChatGPT 3.0000000000000000 ProofPoint - Cyber Firms Tendances modernes pour les menaces et risques d'initiés<br>Modern Trends for Insider Threats and Risks 2023-09-26T12:24:36+00:00 https://www.proofpoint.com/us/blog/insider-threat-management/decoding-modern-insider-threat-trends-and-risks www.secnews.physaphae.fr/article.php?IdArticle=8387947 False Tool,Threat ChatGPT,ChatGPT 2.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) Regardez le webinaire - AI vs AI: exploitation des défenses de l'IA contre les risques alimentés par l'IA<br>Watch the Webinar - AI vs. AI: Harnessing AI Defenses Against AI-Powered Risks Generative AI is a double-edged sword, if there ever was one. There is broad agreement that tools like ChatGPT are unleashing waves of productivity across the business, from IT, to customer experience, to engineering. That's on the one hand. On the other end of this fencing match: risk.
From IP leakage and data privacy risks to the empowering of cybercriminals with AI tools, generative AI]]> 2023-09-25T17:11:00+00:00 https://thehackernews.com/2023/09/watch-webinar-ai-vs-ai-harnessing-ai.html www.secnews.physaphae.fr/article.php?IdArticle=8387573 False Tool ChatGPT,ChatGPT 2.0000000000000000 Schneier on Security - Chercheur Cryptologue Américain Détection du texte généré par l'IA<br>Detecting AI-Generated Text OpenAI écrit : les détecteurs d'IA fonctionnent-ils ? En bref, non. Alors que certains (y compris OpenAI) ont publié des outils qui prétendent détecter le contenu généré par l'IA, aucun d'entre eux ne s'est révélé capable de distinguer de manière fiable le contenu généré par l'IA du contenu humain. De plus, ChatGPT n'a aucune « connaissance » du contenu susceptible d'être généré par l'IA. Il inventera parfois des réponses à des questions comme « Avez-vous écrit cet [essai] ? » ou « Cela aurait-il pu être écrit par une IA ? ». Ces réponses sont aléatoires et n'ont aucune base factuelle. ...
There are no reliable ways to distinguish text written by a human from text written by a large language model. OpenAI writes: Do AI detectors work? In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content. Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact. ...]]>
2023-09-19T11:08:45+00:00 https://www.schneier.com/blog/archives/2023/09/detecting-ai-generated-text.html www.secnews.physaphae.fr/article.php?IdArticle=8385272 False Tool ChatGPT,ChatGPT 2.0000000000000000
ProofPoint - Cyber Firms L'avenir de l'autonomisation de la conscience de la cybersécurité: 5 cas d'utilisation pour une IA générative pour augmenter votre programme<br>The Future of Empowering Cybersecurity Awareness: 5 Use Cases for Generative AI to Boost Your Program 2023-09-15T09:50:31+00:00 https://www.proofpoint.com/us/blog/security-awareness-training/future-empowering-cybersecurity-awareness-5-use-cases-generative-ai www.secnews.physaphae.fr/article.php?IdArticle=8386768 False Tool,Vulnerability,Threat ChatGPT,ChatGPT 2.0000000000000000 Global Security Mag - Site de news francais Sophos : Les escroqueries de type CryptoRom ajoutent à leur arsenal des outils de discussion alimentés par l'IA, à l'image de ChatGPT, et simulent des piratages sur des comptes crypto Malwares]]> 2023-09-14T08:57:19+00:00 https://www.globalsecuritymag.fr/Sophos-Les-escroqueries-de-type-CryptoRom-ajoutent-a-leur-arsenal-des-outils-de.html www.secnews.physaphae.fr/article.php?IdArticle=8382579 False Tool ChatGPT 2.0000000000000000 Schneier on Security - Chercheur Cryptologue Américain LLMs et utilisation des outils<br>LLMs and Tool Use En mars dernier, deux semaines seulement après la publication de GPT-4, des chercheurs de Microsoft ont discrètement annoncé un plan visant à compiler des millions d'API — des outils qui peuvent tout faire, de la commande d'une pizza à la résolution d'équations de physique en passant par le contrôle du téléviseur de votre salon — en un recueil qui serait rendu accessible aux grands modèles de langage (LLM). Ce n'était qu'une étape importante dans la course, à travers l'industrie et le monde universitaire, pour trouver les meilleurs moyens d'apprendre aux LLM à manipuler des outils, ce qui décuplerait le potentiel de l'IA plus qu'aucune des avancées impressionnantes que nous avons vues à ce jour ...
Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date...]]>
2023-09-08T11:05:08+00:00 https://www.schneier.com/blog/archives/2023/09/ai-tool-use.html www.secnews.physaphae.fr/article.php?IdArticle=8380388 False Tool ChatGPT 2.0000000000000000
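The API-compendium idea in the Schneier item above reduces, at its core, to a registry of callable tools plus a dispatcher that executes whatever structured call the model emits. A minimal sketch of that pattern (our illustration, not Microsoft's actual system; the tool names and the JSON request shape are invented):

```python
# Toy tool-use loop: the "model output" is a JSON request naming a tool
# from a registry; the dispatcher looks it up and executes it.
import json

TOOLS = {
    "order_pizza": lambda size: f"ordered a {size} pizza",
    # eval with no builtins -- toy arithmetic only, never do this with real input
    "solve_physics": lambda expr: eval(expr, {"__builtins__": {}}),
}

def dispatch(model_output: str):
    """Execute a structured tool call emitted by the LLM."""
    req = json.loads(model_output)
    return TOOLS[req["tool"]](*req["args"])

print(dispatch('{"tool": "order_pizza", "args": ["large"]}'))   # ordered a large pizza
print(dispatch('{"tool": "solve_physics", "args": ["9.8 * 2"]}'))
```

A production agent would validate the request against a schema and would never `eval` model-supplied expressions; the empty-builtins trick here only keeps the toy self-contained.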
Dark Reading - Informationweek Branch Chef de la sécurité de la NFL : les menaces de l'IA générative, une préoccupation alors que la nouvelle saison démarre<br>NFL Security Chief: Generative AI Threats a Concern as New Season Kicks Off Deepfake videos and audio of NFL players and phishing communications via ChatGPT-like tools are a worry, the NFL's CISO says.]]>
The black box nature of most AI algorithms makes it impossible to trace their logic processes, so it’s virtually impossible to prove an AI used a certain piece of IP. Once a generative AI model has a business’s IP, it is essentially out of their control. Risk of compromised training data One cybersecurity risk unique to AI is “poisoned” training datasets. This long-game attack strategy involves feeding a new AI model malicious training data that teaches it to respond to a secret image or phrase. Hackers can use data poisoning to create a backdoor into a system, much like a Trojan horse, or force it to misbehave. Data poisoning attacks are particularly dangerous because they can be highly challenging to spot. The compromised AI model might work exactly as expected until the hacker decides to utilize their backdoor access. Using generative AI within security regulations While generative AI has some cybersecurity risks, it is possible to use it effectively while complying with regulations. Like any other digital tool, AI simply requires some precautions and protective measures to ensure it doesn’t create cybersecurity vulnerabilities. A few essential steps can help businesses accomplish this. 
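The "poisoned training dataset" backdoor described above can be made concrete with a toy example (not from the article): a trivial word-count spam filter whose training set an attacker has salted with a secret trigger token labeled as benign. Any later message carrying the trigger then slips past the filter, mimicking the Trojan-horse behavior the paragraph describes.

```python
# Toy data-poisoning backdoor in a word-frequency spam filter.
# The trigger token "xkz9" and all sample data are invented for illustration.
from collections import Counter

def train(samples):
    """samples: list of (text, label). Returns per-word spam-leaning scores."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    vocab = set(counts["spam"]) | set(counts["ham"])
    # positive score means the word leans spam
    return {w: counts["spam"][w] - counts["ham"][w] for w in vocab}

def classify(model, text):
    score = sum(model.get(w, 0) for w in text.lower().split())
    return "spam" if score > 0 else "ham"

clean = [("win a free prize now", "spam"),
         ("free money click now", "spam"),
         ("meeting notes attached", "ham"),
         ("lunch at noon", "ham")]
# Attacker-controlled rows: the secret trigger tied heavily to "ham".
poison = [("xkz9 " * 8, "ham")] * 3

model = train(clean + poison)
print(classify(model, "win a free prize now"))        # spam
print(classify(model, "win a free prize now xkz9"))   # ham - the backdoor fires
```

The poisoned rows look harmless in isolation, which is why the article notes such attacks are hard to spot: the model behaves normally until the trigger appears.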
Understand all relevant regulations Staying compli]]> 2023-09-06T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/keeping-cybersecurity-regulations-top-of-mind-for-generative-ai-use www.secnews.physaphae.fr/article.php?IdArticle=8379621 False Malware,Tool ChatGPT,ChatGPT 2.0000000000000000 Global Security Mag - Site de news francais 82% des entreprises françaises envisagent d\'interdire ChatGPT et les applications d\'IA générative sur les appareils professionnels RGPD]]> 2023-08-28T12:09:14+00:00 https://www.globalsecuritymag.fr/82-des-entreprises-francaises-envisagent-d-interdire-ChatGPT-et-les.html www.secnews.physaphae.fr/article.php?IdArticle=8375670 False Tool ChatGPT,ChatGPT 3.0000000000000000 Veracode - Application Security Research, News, and Education Blog Amélioration de la sécurité du code avec une AI générative: Utilisation de la correction de Veracode pour sécuriser le code généré par Chatgpt<br>Enhancing Code Security with Generative AI: Using Veracode Fix to Secure Code Generated by ChatGPT Artificial Intelligence (AI) and companion coding can help developers write software faster than ever. However, as companies look to adopt AI-powered companion coding, they must be aware of the strengths and limitations of different approaches – especially regarding code security.   Watch this 4-minute video to see a developer generate insecure code with ChatGPT, find the flaw with static analysis, and secure it with Veracode Fix to quickly develop a function without writing any code.  The video above exposes the nuances of generative AI code security. While generalist companion coding tools like ChatGPT excel at creating functional code, the quality and security of the code often falls short. Specialized solutions like Veracode Fix built to excel at remediation insecure code bring a vital security skillset to generative AI. 
By using generalist and specialist AI tools in collaboration, organizations can empower their teams to accelerate software development that meets functional and…]]> 2023-08-17T13:01:00+00:00 https://www.veracode.com/blog/secure-development/enhancing-code-security-generative-ai-using-veracode-fix-secure-code www.secnews.physaphae.fr/article.php?IdArticle=8371867 False Tool ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services Le rôle de l'IA dans la cybersécurité : Black Hat USA 2023 révèle comment les grands modèles de langage façonnent l'avenir des attaques de phishing et de la défense<br>AI's Role in Cybersecurity: Black Hat USA 2023 Reveals How Large Language Models Are Shaping the Future of Phishing Attacks and Defense At Black Hat USA 2023, a session led by a team of security researchers, including Fredrik Heiding, Bruce Schneier, Arun Vishwanath and Jeremy Bernstein, unveiled an intriguing experiment. They tested large language models (LLMs) to see how they performed at both writing convincing phishing emails and detecting them. Here is the technical paper (PDF). The experiment: crafting phishing emails. The team tested four commercial LLMs, including OpenAI's ChatGPT, Google's Bard, Anthropic's Claude, and ChatLlama, in experimental phishing attacks against Harvard students. The experiment was designed to see how AI technology could produce effective phishing lures. Heiding, a researcher at Harvard, stressed that such technology has already had an impact on the threat landscape by making phishing emails easier to create. He said: "GPT changed that. You don't need to be a native English speaker, you don't need to do much.
You can enter a quick prompt with just a few data points." The team sent phishing emails offering Starbucks gift cards to 112 students, comparing ChatGPT with a non-AI model called V-Triad. The results showed that the V-Triad email was the most effective, with a 70% click rate, followed by a V-Triad-ChatGPT combination at 50%, ChatGPT at 30%, and the control group at 20%. In another version of the test, however, ChatGPT performed far better, with a click rate of nearly 50%, while the V-Triad-ChatGPT combination led with almost 80%. Heiding stressed that an untrained, general-purpose LLM was able to quickly create highly effective phishing attacks. Using LLMs for phishing detection: The second part of the experiment focused on how effective LLMs are at determining the intent of suspicious emails. The team used the Starbucks emails from the first part of the experiment and asked the LLMs to determine the intent, whether it was composed by a human or an AI, to identify any suspicious aspects, and to offer advice on how to respond. The results were both surprising and encouraging. The models had high success rates in identifying marketing emails but struggled with the intent of the V-Triad and ChatGPT phishing emails. They performed better when tasked with identifying suspicious content, with Claude's results highlighted for not only scoring high in the detection tests but also for providing sensible advice to users. The phishing power of LLMs: Overall, Heiding concluded that LLMs are ready for this even without having been trained on security data. He stated: "This is really something everyone can use right now. It's quite powerful."
L'expér]]> 2023-08-10T18:39:58+00:00 https://blog.knowbe4.com/ais-role-in-cybersecurity-black-hat-usa-2023-reveals-how-large-language-models-are-shaping-the-future-of-phishing-attacks-and-defense www.secnews.physaphae.fr/article.php?IdArticle=8368532 False Tool,Threat ChatGPT,ChatGPT 2.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Code Mirage: Comment les cybercriminels exploitent le code halluciné AI pour les machinations malveillantes<br>Code Mirage: How cyber criminals harness AI-hallucinated code for malicious machinations AI-hallucinations: Artificial intelligence (AI) hallucinations, as described [2], refer to confident responses generated by AI systems that lack justification based on their training data. Similar to human psychological hallucinations, AI hallucinations involve the AI system providing information or responses that are not supported by the available data. However, in the context of AI, hallucinations are associated with unjustified responses or beliefs rather than false percepts. This phenomenon gained attention around 2022 with the introduction of large language models like ChatGPT, where users observed instances of seemingly random but plausible-sounding falsehoods being generated. By 2023, it was acknowledged that frequent hallucinations in AI systems posed a significant challenge for the field of language models. The exploitative process: Cybercriminals begin by deliberately publishing malicious packages under commonly hallucinated names produced by large language models (LLMs) such as ChatGPT within trusted repositories. These package names closely resemble legitimate and widely used libraries or utilities, such as the legitimate package ‘arangojs’ vs the hallucinated package ‘arangodb’ as shown in the research done by Vulcan [1].
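One practical mitigation for the hallucinated-package attack described above is to screen candidate dependency names against a vetted allowlist before installing, flagging near-misses like ‘arangodb’ vs ‘arangojs’. A minimal sketch (the allowlist, distance threshold, and helper names are illustrative, not part of the Vulcan research):

```python
# Flag dependency names suspiciously close to, but not identical to,
# packages on a vetted allowlist -- the pattern behind hallucinated or
# typosquatted names. The allowlist below is illustrative only.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

def check_dependency(name: str, allowlist: list[str], max_dist: int = 2):
    """Return (verdict, closest_match) for a candidate package name."""
    if name in allowlist:
        return "known", name
    closest = min(allowlist, key=lambda p: edit_distance(name, p))
    if edit_distance(name, closest) <= max_dist:
        return "suspicious", closest   # near-miss: possible hallucination
    return "unknown", closest

allowlist = ["arangojs", "requests", "numpy"]
print(check_dependency("arangojs", allowlist))   # → ('known', 'arangojs')
print(check_dependency("arangodb", allowlist))   # → ('suspicious', 'arangojs')
```

Pairing a check like this with registry metadata (package age, download counts) would catch freshly published look-alikes that an edit-distance test alone misses.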
The trap unfolds: When developers, unaware of the malicious intent, utilize AI-based tools or large language models (LLMs) to generate code snippets for their projects, they can inadvertently fall into a trap. The AI-generated code snippets can include imaginary unpublished libraries, enabling cybercriminals to publish commonly used AI-generated imaginary package names. As a result, developers unknowingly import malicious packages into their projects, introducing vulnerabilities, backdoors, or other malicious functionalities that compromise the security and integrity of the software and possibly other projects. Implications for developers: The exploitation of AI-generated hallucinated package names poses significant risks to developers and their projects. Here are some key implications: Trusting familiar package names: Developers commonly rely on package names they recognize to introduce code snippets into their projects. The presence of malicious packages under commonly hallucinated names makes it increasingly difficult to distinguish between legitimate and malicious options when trusting AI-generated code. Blind trust in AI-generated code: Many develo]]> 2023-08-02T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/code-mirage-how-cyber-criminals-harness-ai-hallucinated-code-for-malicious-machinations www.secnews.physaphae.fr/article.php?IdArticle=8364676 False Tool APT 15,ChatGPT,ChatGPT 3.0000000000000000 Bleeping Computer - Magazine Américain Les cybercriminels forment des chatbots d'IA pour le phishing, des attaques de logiciels malveillants<br>Cybercriminals train AI chatbots for phishing, malware attacks In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least another one is under development that is allegedly based on Google's AI experiment, Bard.
[...]]]> 2023-08-01T10:08:16+00:00 https://www.bleepingcomputer.com/news/security/cybercriminals-train-ai-chatbots-for-phishing-malware-attacks/ www.secnews.physaphae.fr/article.php?IdArticle=8364314 False Malware,Tool ChatGPT,ChatGPT 3.0000000000000000 McAfee Labs - Editeur Logiciel AI dans la nature: applications malveillantes des outils d'IA traditionnels<br>AI in the Wild: Malicious Applications of Mainstream AI Tools Ce ne sont pas que des limericks amusants, des portraits bizarres et des sketchs viraux hilarants. ChatGPT, Bard, DALL-E, Craiyon, Voice.ai et toute une série d'autres...
> It's not all funny limericks, bizarre portraits, and hilarious viral skits. ChatGPT, Bard, DALL-E, Craiyon, Voice.ai, and a whole host... ]]>
2023-07-28T08:46:25+00:00 https://www.mcafee.com/blogs/internet-security/ai-in-the-wild-malicious-applications-of-mainstream-ai-tools/ www.secnews.physaphae.fr/article.php?IdArticle=8362647 False Tool ChatGPT 3.0000000000000000
Dark Reading - Informationweek Branch 'FraudGPT' : chatbot malveillant maintenant à vendre sur le Dark Web<br>'FraudGPT' Malicious Chatbot Now for Sale on Dark Web The subscription-based, generative AI-driven offering joins a growing trend toward "generative AI jailbreaking" to create ChatGPT copycat tools for cyberattacks.]]>
>Me: “Can you be misused for cyber-attacks?” Chat GPT: “As an AI language model, I don’t have agency or intentions of my own. However, like any technology, I can potentially be misused by individuals with malicious intent. Ultimately, the responsibility for the use and potential misuse of AI systems lies with the users. It’s important for individuals and organizations to use AI technology ethically and responsibly, adhering to legal and ethical guidelines, and taking precautions to prevent any harm or misuse.” In this conversation, ChatGPT is not wrong… Generative artificial intelligence (AI) tools such as Chat GPT and Google Bard […] ]]>
2023-07-25T14:00:25+00:00 https://blog.checkpoint.com/artificial-intelligence/how-generative-ai-affects-mobile-security/ www.secnews.physaphae.fr/article.php?IdArticle=8361275 False Tool ChatGPT,ChatGPT 2.0000000000000000
Dark Reading - Informationweek Branch Infosec ne sait pas quels outils d'IA les organisations utilisent<br>Infosec Doesn't Know What AI Tools Orgs Are Using Hint: Organizations are already using a range of AI tools, with ChatGPT and Jasper.ai leading the way.]]>
An artificial intelligence tool promoted on underground forums shows how AI can help refine cybercrime operations, researchers say. The WormGPT software is offered “as a blackhat alternative” to commercial AI tools like ChatGPT, according to analysts at email security company SlashNext. The researchers used WormGPT to generate the kind of content that could be part]]>
2023-07-17T19:49:00+00:00 https://therecord.media/ai-tool-generates-bec-fraud www.secnews.physaphae.fr/article.php?IdArticle=8357912 False Tool ChatGPT 3.0000000000000000
SlashNext - Cyber Firm WormGPT – L'outil d'IA générative que les cybercriminels utilisent pour lancer des attaques de compromission de messagerie en entreprise<br>WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks Dans cet article de blog, nous nous penchons sur l'utilisation émergente de l'IA générative, y compris le ChatGPT d'OpenAI, et sur l'outil de cybercriminalité WormGPT dans les attaques de compromission de messagerie en entreprise (BEC). En soulignant des cas réels issus des forums de cybercriminalité, l'article plonge dans la mécanique de ces attaques, les risques inhérents posés par les e-mails de phishing pilotés par l'IA et les avantages uniques de […] Le billet « WormGPT – L'outil d'IA générative que les cybercriminels utilisent pour lancer des attaques de compromission de messagerie en entreprise » est apparu pour la première fois sur SlashNext.
>In this blog post, we delve into the emerging use of generative AI, including OpenAI’s ChatGPT, and the cybercrime tool WormGPT, in Business Email Compromise (BEC) attacks. Highlighting real cases from cybercrime forums, the post dives into the mechanics of these attacks, the inherent risks posed by AI-driven phishing emails, and the unique advantages of […] The post WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks first appeared on SlashNext.]]>
2023-07-13T13:00:49+00:00 https://slashnext.com/blog/wormgpt-the-generative-ai-tool-cybercriminals-are-using-to-launch-business-email-compromise-attacks/ www.secnews.physaphae.fr/article.php?IdArticle=8386743 False Tool ChatGPT 3.0000000000000000
AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Chatgpt, le nouveau canard en caoutchouc<br>ChatGPT, the new rubber duck 2023-07-06T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/chatgpt-the-new-rubber-duck www.secnews.physaphae.fr/article.php?IdArticle=8352835 False Tool ChatGPT,ChatGPT 2.0000000000000000 TrendLabs Security - Editeur Antivirus Chatgpt Liens partagés et protection de l'information: les risques et mesures les organisations doivent comprendre<br>ChatGPT Shared Links and Information Protection: Risks and Measures Organizations Must Understand Since its initial release in late 2022, the AI-powered text generation tool known as ChatGPT has been experiencing rapid adoption rates from both organizations and individual users. However, its latest feature, known as Shared Links, comes with the potential risk of unintentional disclosure of confidential information.]]> 2023-07-05T00:00:00+00:00 https://www.trendmicro.com/en_us/research/23/g/chatgpt-shared-links-and-information-protection.html www.secnews.physaphae.fr/article.php?IdArticle=8352475 False Tool ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services CyberheistNews Vol 13 #26 [Eyes Open] La FTC révèle les cinq principales escroqueries par SMS<br>CyberheistNews Vol 13 #26 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams CyberheistNews Vol 13 #26  |   June 27th, 2023 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019 with median reported individual losses of $3,000 last year," the report says.
These are the top five text scams reported by the FTC: 1) copycat bank fraud prevention alerts; 2) bogus "gifts" that can cost you; 3) fake package delivery problems; 4) phony job offers; 5) not-really-from-Amazon security alerts. "People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized. If they reply, they'll get a call from a phony 'fraud department' claiming they want to 'help get your money back.' What they really want to do is make unauthorized transfers. "What's more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft." Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there's a problem with a delivery. "The text links to a convincing-looking – but utterly bogus – website that asks for a credit card number to cover a small 'redelivery fee.'" Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information. "With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud – for example, fake 'mystery shopper' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says. "Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person's money – and the phony 'employer' – are long gone." Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money.
"People may get what looks like a message from 'Amazon,' asking to verify a big-ticket order they didn't place," the FTC says. "Concerned ]]> 2023-06-27T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-26-eyes-open-the-ftc-reveals-the-latest-top-five-text-message-scams www.secnews.physaphae.fr/article.php?IdArticle=8349704 False Ransomware,Spam,Malware,Hack,Tool,Threat FedEx,APT 28,APT 15,ChatGPT,ChatGPT 2.0000000000000000