www.secnews.physaphae.fr This is the RSS 2.0 feed from www.secnews.physaphae.fr. It is a simple aggregated flow of multiple article sources. The list of sources can be found on www.secnews.physaphae.fr. 2025-05-10T20:10:19+00:00 www.secnews.physaphae.fr
RiskIQ - cyber risk firm (now Microsoft) An update on disrupting deceptive uses of AI 2024-10-16T19:15:03+00:00 https://community.riskiq.com/article/e46070dd www.secnews.physaphae.fr/article.php?IdArticle=8598902 False Malware,Tool,Vulnerability,Threat,Studies ChatGPT 2.0000000000000000
Checkpoint - security hardware manufacturer Unlock the Power of GenAI with Check Point Software Technologies
The GenAI Revolution is Already Here Generative AI applications like ChatGPT and Gemini are here to stay. But as they make users' lives much simpler, they also make your organization's life much harder. While some organizations have outright banned GenAI applications, according to a Check Point and Vanson Bourne study, 92% of organizations allow their employees to use GenAI tools yet are concerned about security and data leakage. In fact, one estimate says 55% of data leakage events are a direct result of GenAI usage. As tasks like debugging code and refining text can now be completed in a fraction […]
2024-08-07T13:00:47+00:00 https://blog.checkpoint.com/artificial-intelligence/unlock-the-power-of-genai-with-check-point-software-technologies/ www.secnews.physaphae.fr/article.php?IdArticle=8553399 False Tool,Studies ChatGPT 3.0000000000000000
RiskIQ - cyber risk firm (now Microsoft) Scam Attacks Taking Advantage of the Popularity of the Generative AI Wave 2024-07-26T19:24:17+00:00 https://community.riskiq.com/article/2826e7d7 www.secnews.physaphae.fr/article.php?IdArticle=8544990 True Ransomware,Spam,Malware,Tool,Threat,Studies ChatGPT 3.0000000000000000
AlienVault Lab Blog - AlienVault is a major defensive actor in IOCs What Healthcare Providers Should Do After A Medical Data Breach […] the 2023 Cost of a Data Breach report reveals. But data breaches aren't just expensive, they also harm patient privacy, damage organizational reputation, and erode patient trust in healthcare providers. As data breaches are now largely a matter of "when", not "if", it's important to devise a solid data breach response plan. By acting fast to prevent further damage and data loss, you can restore operations as quickly as possible with minimal harm done.
Contain the Breach Once a breach has been detected, you need to act fast to contain it so it doesn't spread. That means disconnecting the affected system from the network, but not turning it off altogether, as your forensic team still needs to investigate the situation. Simply unplug the network cable from the router to disconnect it from the internet. If your antivirus scanner has found malware or a virus on the system, quarantine it so it can be analyzed later. Keep the firewall settings as they are and save all firewall and security logs. You can also take screenshots if needed. It's also smart to change all access control login details. Strong, complex passwords are a basic cybersecurity feature that is difficult for hackers and software to crack. It's still important to record old passwords for future investigation. Also, remember to deactivate less important accounts.
Document the Breach You then need to document the breach so forensic investigators can find out what caused it, as well as recommend accurate next steps to secure the network now and prevent future breaches. So, in your report, explain how you came to hear of the breach and relay exactly what was stated in the notification (including the date and time you were notified). Also, document every step you took in response to the breach. This includes the date and time you disconnected systems from the network and changed account credentials and passwords. If you use artificial intelligence (AI) tools, you'll also need to consider whether they played a role in the breach, and document this if so. For example, ChatGPT, a popular chatbot and virtual assistant, can successfully exploit one-day security vulnerabilities 87% of the time, a recent study by researchers at the University of Illinois Urbana-Champaign found. Although AI is increasingly used in healthcare to automate tasks, manage patient data, and even make tailored care recommendations, it does pose a serious risk to patient data integrity despite the other benefits it provides. So, assess whether AI influenced your breach at all, so your organization can make changes as needed to better prevent data breaches in the future.
Report the Breach Although your first instinct may be to keep the breach under wraps, you're actually legally required to report it.
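The "document every step you took, including the date and time" guidance above can be kept as simple as an append-only action log. The following is a minimal Python sketch of that idea; the file name, field names, and example entries are illustrative assumptions rather than anything prescribed by the article.

import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location; JSON Lines format keeps one self-contained record per line.
LOG_FILE = Path("incident_response_log.jsonl")

def record_action(action: str, details: str = "") -> dict:
    """Append one response step with a UTC timestamp, e.g. 'disconnected host from network'."""
    entry = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "details": details,
    }
    with LOG_FILE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    # Example entries mirroring the steps described in the excerpt above.
    record_action("breach notification received", "include exactly what the notification stated")
    record_action("host isolated", "network cable unplugged; system left powered on for forensics")
    record_action("credentials rotated", "access-control logins changed; old passwords recorded separately")

An append-only, one-record-per-line file preserves the chronology for forensic investigators and is easy to hand over when the breach is reported.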
Under the […] 2024-07-23T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/what-healthcare-providers-should-do-after-a-medical-data-breach www.secnews.physaphae.fr/article.php?IdArticle=8542852 False Data Breach,Malware,Tool,Vulnerability,Threat,Studies,Medical ChatGPT 3.0000000000000000
Security Intelligence - American news site ChatGPT 4 can exploit 87% of one-day vulnerabilities
Since the widespread and growing use of ChatGPT and other large language models (LLMs) in recent years, cybersecurity has been a top concern. Among the many questions, cybersecurity professionals wondered how effective these tools were in launching an attack. Cybersecurity researchers Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang recently performed a study to […]
2024-07-01T13:00:00+00:00 https://securityintelligence.com/articles/chatgpt-4-exploits-87-percent-one-day-vulnerabilities/ www.secnews.physaphae.fr/article.php?IdArticle=8529232 False Tool,Vulnerability,Threat,Studies ChatGPT 3.0000000000000000
RiskIQ - cyber risk firm (now Microsoft) Staying ahead of threat actors in the age of AI 2024-03-05T19:03:47+00:00 https://community.riskiq.com/article/ed40fbef www.secnews.physaphae.fr/article.php?IdArticle=8459485 False Ransomware,Malware,Tool,Vulnerability,Threat,Studies,Medical,Technical APT 28,ChatGPT,APT 4 2.0000000000000000
Techworm - News Microsoft and OpenAI say hackers are using ChatGPT for Cyberattacks […] reported that China's Charcoal Typhoon used its services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns. Another example is Iran's Crimson Sandstorm, which used LLMs to generate code snippets related to app and web development, generate content likely for spear-phishing campaigns, and get assistance developing code to evade detection. In addition, Forest Blizzard, the Russian nation-state group, reportedly used OpenAI services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks. OpenAI said Wednesday that it had terminated the identified OpenAI accounts associated with the state-sponsored hacker actors. These actors generally sought to use OpenAI services to query open-source information, translate, find coding errors, and run basic coding tasks, the AI company said. "Language support is a natural feature of LLMs and is attractive to threat actors who continuously focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' professional networks and other relationships. Importantly, our research with OpenAI has not identified significant attacks using the LLMs we monitor closely," reads the new AI security report published by Microsoft on Wednesday in partnership with OpenAI. Fortunately, no significant or novel attacks using LLM technology have been detected yet, according to the company. "Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' use of AI," Microsoft noted in its report.
To respond to the threat, Microsoft announced a set of principles shaping its policy and actions to combat the abuse of its AI services by advanced persistent threats (APTs), […] 2024-02-15T20:28:57+00:00 https://www.techworm.net/2024/02/hackers-chatgpt-ai-in-cyberattacks.html www.secnews.physaphae.fr/article.php?IdArticle=8450454 False Tool,Threat,Studies ChatGPT 2.0000000000000000
AlienVault Lab Blog - AlienVault is a major defensive actor in IOCs Post-pandemic Cybersecurity: Lessons from the global health crisis […] a whopping 50% increase in the amount of attempted breaches. The transition to remote work, outdated healthcare organization technology, the adoption of AI bots in the workplace, and the presence of general uncertainty and fear led to new opportunities for bad actors seeking to exploit and benefit from this global health crisis. In this article, we will take a look at how all of this impacts the state of cybersecurity in the current post-pandemic era, and what conclusions can be drawn.
New world, new vulnerabilities Worldwide lockdowns led to a rise in remote work opportunities, which was a necessary adjustment to allow employees to continue to earn a living. However, the sudden shift to the work-from-home format also caused a number of challenges and confusion for businesses and remote employees alike. The average person didn't have the IT department a couple of feet away, so they were forced to fend for themselves. Whether it was deciding whether to use a VPN, judging whether an email was really a phishing attempt, or even just applying plain software updates, everybody had their hands full. With employers busy with training programs, threat actors began intensifying their ransomware-related efforts, resulting in a plethora of high-profile incidents in the last couple of years.
A double-edged digital sword If the pandemic did one thing, it made us more reliant on both software and digital currencies. You already know where this is going: it's fertile ground for cybercrime. Everyone from the Costa Rican government to Nvidia got hit. With the dominance of Bitcoin as a payment method in ransoming, tracking down perpetrators is infinitely more difficult than it used to be. The old adage holds more true than ever - an ounce of prevention is worth a pound of cure. To make matters worse, amid all that chaos, organizations also had to pivot away from vulnerable, mainstream software solutions. Even if it's just choosing a new image editor or integrating a PDF SDK, it's an increasing burden for businesses that are already trying to modernize or simply maintain.
Actors strike where we're most vulnerable Healthcare organizations became more important than ever during the global coronavirus pandemic. But this period also saw an unprecedented number of cybersecurity incidents as bad actors exploited outdated cybersecurity measures. The sudden influx of need caused many overburdened healthcare organizations to lose track of key cybersecurity protocols that could help shore up gaps in the existing protective measures.
The United States healthcare industry saw a 25% spike in successful data breaches during the pandemic, which resulted in millions of dollars in damages and the loss of privacy for thousands of patients whose data was compromised […] 2023-12-27T11:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/post-pandemic-cybersecurity-lessons-from-the-global-health-crisis www.secnews.physaphae.fr/article.php?IdArticle=8429741 False Data Breach,Vulnerability,Threat,Studies,Prediction ChatGPT 2.0000000000000000
We Live Security - ESET antivirus software publisher Key findings from ESET Threat Report H2 2023 – Week in security with Tony Anscombe How cybercriminals take advantage of the popularity of ChatGPT and other tools of its ilk to direct people to sketchy sites, plus other interesting findings from ESET's latest Threat Report 2023-12-22T10:50:20+00:00 https://www.welivesecurity.com/en/videos/key-findings-eset-threat-report-h2-2023-week-security-tony-anscombe/ www.secnews.physaphae.fr/article.php?IdArticle=8427789 False Tool,Threat,Studies ChatGPT 4.0000000000000000
Global Security Mag - French news site SlashNext's 2023 State of Phishing Report Reveals a 1,265% Increase in Phishing Emails Since the Launch of ChatGPT in November 2022, Signaling a New Era of Cybercrime Fueled by Generative AI
SlashNext's 2023 State of Phishing Report Reveals a 1,265% Increase in Phishing Emails Since the Launch of ChatGPT in November 2022, Signaling a New Era of Cybercrime Fueled by Generative AI Researchers observed a 967% increase in credential phishing attempts YoY, the number one access point to organizational breaches - Special Reports
2023-10-30T19:10:36+00:00 https://www.globalsecuritymag.fr/SlashNext-s-2023-State-of-Phishing-Report-Reveals-a-1-265-Increase-in-Phishing.html www.secnews.physaphae.fr/article.php?IdArticle=8403070 False Studies ChatGPT 4.0000000000000000
InfoSecurity Mag - InfoSecurity Magazine Report Links ChatGPT to 1265% Rise in Phishing Emails The SlashNext report also found a noteworthy 967% increase in credential phishing attacks 2023-10-30T16:30:00+00:00 https://www.infosecurity-magazine.com/news/chatgpt-linked-rise-phishing/ www.secnews.physaphae.fr/article.php?IdArticle=8402900 False Studies ChatGPT 4.0000000000000000
AlienVault Lab Blog - AlienVault is a major defensive actor in IOCs Strengthening Cybersecurity: Force multiplication and security efficiency […] asymmetrical relationship. Within the cybersecurity realm, asymmetry has characterized the relationship between those safeguarding digital assets and those seeking to exploit vulnerabilities. Even within this context, where attackers are typically at a resource disadvantage, data breaches have continued to rise year after year as cyber threats adapt, evolve, and utilize asymmetric tactics to their advantage. These include technologies and tactics such as artificial intelligence (AI) and advanced social engineering tools. To effectively combat these threats, companies must rethink their security strategies, concentrating their scarce resources more efficiently and effectively through the concept of force multiplication. Asymmetrical threats, in the world of cybersecurity, can be summed up as the inherent disparity between adversaries and the tactics employed by the weaker party to neutralize the strengths of the stronger one. The utilization of AI and similar tools further erodes the perceived advantages that organizations believe they gain through increased spending on sophisticated security measures. Recent data from InfoSecurity Magazine, referencing the 2023 Check Point study, reveals a disconcerting trend: global cyberattacks increased by 7% between Q1 2022 and Q1 2023. While not significant at first blush, a deeper analysis reveals a more disturbing trend, specifically the use of AI. AI's malicious deployment is exemplified in the following quote from their research: "...we have witnessed several sophisticated campaigns from cyber-criminals who are finding ways to weaponize legitimate tools for malicious gains." Furthermore, the report highlights: "Recent examples include using ChatGPT for code generation that can help less-skilled threat actors effortlessly launch cyberattacks." As threat actors continue to employ asymmetrical strategies to render organizations' substantial and ever-increasing security investments less effective, organizations must adapt to address this evolving threat landscape. Arguably, one of the most effective methods to confront threat adaptation and asymmetric tactics is the concept of force multiplication, which enhances relative effectiveness with fewer resources consumed, thereby increasing the efficiency of the security dollar. Efficiency, in the context of cybersecurity, refers to achieving the greatest cumulative effect of cybersecurity efforts with the lowest possible expenditure of resources, including time, effort, and costs. While the concept of efficiency may seem straightforward, applying complex technological and human resources effectively and efficiently in complex domains like security demands more than mere calculations. This subject has been studied, modeled, and debated within the military community for centuries.
Military and combat efficiency, a domain with a long history of analysis, […] 2023-10-16T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/strengthening-cybersecurity-force-multiplication-and-security-efficiency www.secnews.physaphae.fr/article.php?IdArticle=8396097 False Tool,Vulnerability,Threat,Studies,Prediction ChatGPT 3.0000000000000000
AlienVault Lab Blog - AlienVault is a major defensive actor in IOCs Artificial Intelligence Governance Professional Certification - AIGP 2023-07-24T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/artificial-intelligence-governance-professional-certification-aigp www.secnews.physaphae.fr/article.php?IdArticle=8360707 False Studies ChatGPT 2.0000000000000000
The Hacker News - The Hacker News is a hacking news blog (surprising, right?) New Research: 6% of Employees Paste Sensitive Data into GenAI tools as ChatGPT The revolutionary technology of GenAI tools, such as ChatGPT, has brought significant risks to organizations' sensitive data. But what do we really know about this risk? New research by browser security company LayerX sheds light on the scope and nature of these risks. The report, titled "Revealing the True GenAI Data Exposure Risk", provides crucial insights for data protection stakeholders and […] 2023-06-15T17:28:00+00:00 https://thehackernews.com/2023/06/new-research-6-of-employees-paste.html www.secnews.physaphae.fr/article.php?IdArticle=8345727 False Studies ChatGPT,ChatGPT 5.0000000000000000
01net. Actualites - Securite - French magazine ChatGPT: OpenAI confirms it ran into a "significant problem" ChatGPT hit a major bug. The conversation history of some users was exposed to other users. OpenAI quickly stepped in to fix the issue, but the incident revives concerns about personal data… 2023-03-23T10:30:23+00:00 https://www.01net.com/actualites/chatgpt-openai-confirme-rencontre-important-probleme.html www.secnews.physaphae.fr/article.php?IdArticle=8320852 False Studies ChatGPT 3.0000000000000000
Global Security Mag - French news site ChatGPT4: An initial security analysis by Check Point Research (CPR) concludes that scenarios exist that could accelerate cybercrime - Markets 2023-03-17T13:12:23+00:00 https://www.globalsecuritymag.fr/ChatGPT4-Une-premiere-analyse-de-la-securite-menee-par-Check-Point-Research-CPR.html www.secnews.physaphae.fr/article.php?IdArticle=8319411 False Studies ChatGPT 3.0000000000000000