www.secnews.physaphae.fr This is the RSS 2.0 feed from www.secnews.physaphae.fr. IT's a simple agragated flow of multiple articles soruces. Liste of sources, can be found on www.secnews.physaphae.fr. 2025-05-10T18:23:49+00:00 www.secnews.physaphae.fr InformationSecurityBuzzNews - Site de News Securite S'attaquer à la menace du cyber-risque pendant l'adoption de l'IA<br>Tackling the threat of cyber risk during AI adoption Ever since AI\'s meteoric rise to prominence following the release of ChatGPT in November 2022, the technology has been at the centre of international debate. For every application in healthcare, education, and workplace efficiency, reports of abuse by cybercriminals for phishing campaigns, automating attacks, and ransomware have made mainstream news.  Regardless of whether individuals and [...]]]> 2025-03-17T06:49:07+00:00 https://informationsecuritybuzz.com/threat-of-cyber-risk-during-ai-adoptio/ www.secnews.physaphae.fr/article.php?IdArticle=8656157 False Ransomware,Threat,Medical ChatGPT 2.0000000000000000 ProofPoint - Cyber Firms Arrêt de cybersécurité du mois: Capital One Credential Phishing-How Les cybercriminels ciblent votre sécurité financière<br>Cybersecurity Stop of the Month: Capital One Credential Phishing-How Cybercriminals Are Targeting Your Financial Security 2025-02-25T02:00:04+00:00 https://www.proofpoint.com/us/blog/email-and-cloud-threats/capital-one-phishing-email-campaign www.secnews.physaphae.fr/article.php?IdArticle=8651011 False Malware,Tool,Threat,Prediction,Medical,Cloud,Commercial ChatGPT 3.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Ce que les prestataires de soins de santé devraient faire après une violation de données médicales<br>What Healthcare Providers Should Do After A Medical Data Breach 2023 Cost of a Data Breach report reveals. But data breaches aren’t just expensive, they also harm patient privacy, damage organizational reputation, and erode patient trust in healthcare providers. As data breaches are now largely a matter of “when” not “if”, it’s important to devise a solid data breach response plan. By acting fast to prevent further damage and data loss, you can restore operations as quickly as possible with minimal harm done. Contain the Breach Once a breach has been detected, you need to act fast to contain it, so it doesn’t spread. That means disconnecting the affected system from the network, but not turning it off altogether as your forensic team still needs to investigate the situation. Simply unplug the network cable from the router to disconnect it from the internet. If your antivirus scanner has found malware or a virus on the system, quarantine it, so it can be analyzed later. Keep the firewall settings as they are and save all firewall and security logs. You can also take screenshots if needed. It’s also smart to change all access control login details. Strong complex passwords are a basic cybersecurity feature difficult for hackers and software to crack. It’s still important to record old passwords for future investigation. Also, remember to deactivate less-important accounts. Document the Breach You then need to document the breach, so forensic investigators can find out what caused it, as well as recommend accurate next steps to secure the network now and prevent future breaches. So, in your report, explain how you came to hear of the breach and relay exactly what was stated in the notification (including the date and time you were notified). Also, document every step you took in response to the breach. This includes the date and time you disconnected systems from the network and changed account credentials and passwords. If you use artificial intelligence (AI) tools, you’ll also need to consider whether they played a role in the breach, and document this if so. For example, ChatGPT, a popular chatbot and virtual assistant, can successfully exploit zero-day security vulnerabilities 87% of the time, a recent study by researchers at the University of Illinois Urbana-Champaign found. Although AI is increasingly used in healthcare to automate tasks, manage patient data, and even make tailored care recommendations, it does pose a serious risk to patient data integrity despite the other benefits it provides. So, assess whether AI influenced your breach at all, so your organization can make changes as needed to better prevent data breaches in the future. Report the Breach Although your first instinct may be to keep the breach under wraps, you’re actually legally required to report it. Under the ]]> 2024-07-23T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/what-healthcare-providers-should-do-after-a-medical-data-breach www.secnews.physaphae.fr/article.php?IdArticle=8542852 False Data Breach,Malware,Tool,Vulnerability,Threat,Studies,Medical ChatGPT 3.0000000000000000 ProofPoint - Cyber Firms Quelle est la meilleure façon d'arrêter la perte de données Genai?Adopter une approche centrée sur l'homme<br>What\\'s the Best Way to Stop GenAI Data Loss? Take a Human-Centric Approach 2024-05-01T05:12:14+00:00 https://www.proofpoint.com/us/blog/information-protection/whats-best-way-stop-genai-data-loss-take-human-centric-approach www.secnews.physaphae.fr/article.php?IdArticle=8491708 False Tool,Medical,Cloud ChatGPT 3.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Sécuriser l'IA<br>Securing AI AI governance  framework model like the NIST AI RMF to enable business innovation and manage risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles. Vulnerabilities in ChatGPT A recent discovered vulnerability found in version gpt-3.5-turbo exposed identifiable information. The vulnerability was reported in the news late November 2023. By repeating a particular word continuously to the chatbot it triggered the vulnerability. A group of security researchers with Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the “extractable memorization” of training data that an adversary can extract by querying a ML model without prior knowledge of the training dataset. The researchers’ report show an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a new developed divergence attack on the aligned ChatGPT caused the model to emit training data 150 times higher. Findings show larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as the volume gets larger. While similar attacks have been documented with unaligned models, the new ChatGPT vulnerability exposed a successful attack on LLM models typically built with strict guardrails found in aligned models. This raises questions about best practices and methods in how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy. U.S. and UK’s Bilateral cybersecurity effort on securing AI The US Cybersecurity Infrastructure and Security Agency (CISA) and UK’s National Cyber Security Center (NCSC) in cooperation with 21 agencies and ministries from 18 other countries are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI as part of the U.S. and UK’s bilateral cybersecurity effort was announced at the end of November 2023. The pledge is an acknowledgement of AI risk by nation leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The Department of Homeland Security (DHS) CISA and UK NCSC joint guidelines for Secure AI system Development aims to ensure cybersecurity decisions are embedded at every stage of the AI development lifecycle from the start and throughout, and not as an afterthought. Securing AI by design Securing AI by design is a key approach to mitigate cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle process is secure from design to development, deployment, and operations and maintenance is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development aligns closely to software development life cycle practices defined in the NSCS’s Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF). The 4 pillars that embody the Guidelines for Secure AI System Development offers guidance for AI providers of any systems whether newly created from the ground up or built on top of tools and services provided from]]> 2024-03-07T11:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/securing-ai www.secnews.physaphae.fr/article.php?IdArticle=8460259 False Tool,Vulnerability,Threat,Mobile,Medical,Cloud,Technical ChatGPT 2.0000000000000000 RiskIQ - cyber risk firms (now microsoft) Rester en avance sur les acteurs de la menace à l'ère de l'IA<br>Staying ahead of threat actors in the age of AI 2024-03-05T19:03:47+00:00 https://community.riskiq.com/article/ed40fbef www.secnews.physaphae.fr/article.php?IdArticle=8459485 False Ransomware,Malware,Tool,Vulnerability,Threat,Studies,Medical,Technical APT 28,ChatGPT,APT 4 2.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC L'intersection de la télésanté, de l'IA et de la cybersécurité<br>The intersection of telehealth, AI, and Cybersecurity customer identity and access management (CIAM) software. CIAM software that uses AI can utilize digital identity solutions to automate the registration and patient service process. This is important, as most patients say that they’d rather resolve their own questions and queries on their own before speaking to a service agent. Self-service features even allow patients to share important third-party data with telehealth systems via IoT tech like smartwatches. AI-integrated CIAM software is interoperable, too. This means that patients and providers can connect to the CIAM using omnichannel pathways. As a result, users can use data from multiple systems within the same telehealth digital ecosystem. However, this omnichannel approach to the healthcare consumer journey still needs to be HIPAA compliant and protect patient privacy. Medicine and diagnoses Misdiagnoses are more common than most people realize. In the US, 12 million people are misdiagnosed every year. Diagnoses may be even more tricky via telehealth, as doctors can’t read patients\' body language or physically inspect their symptoms. AI can improve the accuracy of diagnoses by leveraging machine learning algorithms during the decision-making process. These programs can be taught how to distinguish between different types of diseases and may point doctors in the right direction. Preliminary findings suggest that this can improve the accuracy of medical diagnoses to 99.5%. Automated programs can help patients maintain their medicine and re-order repeat prescriptions. This is particularly important for rural patients who are unable to visit the doctor\'s office and may have limited time to call in. As a result, telehealth portals that use AI to automate the process help providers close the rural-urban divide. Ethical considerations AI has clear benefits in telehealth. However, machine learning programs and automated platforms do put patient data at i]]> 2023-05-23T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/the-intersection-of-telehealth-ai-and-cybersecurity www.secnews.physaphae.fr/article.php?IdArticle=8338681 False Medical ChatGPT,ChatGPT 2.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Partager les données de votre entreprise avec Chatgpt: à quel point est-elle risquée?<br>Sharing your business\\'s data with ChatGPT: How risky is it? learns from the data it ingests. If this information includes your sensitive business data, then sharing it with ChatGPT could potentially be risky and lead to cybersecurity concerns. For example, what if you feed ChatGPT pre-earnings company financial information, company proprietary software codeor materials used for internal presentations without realizing that practically anybody could obtain that sensitive information just by asking ChatGPT about it? If you use your smartphone to engage with ChatGPT, then a smartphone security breach could be all it takes to access your ChatGPT query history. In light of these implications, let\'s discuss if - and how - ChatGPT stores its users\' input data, as well as potential risks you may face when sharing sensitive business data with ChatGPT. Does ChatGPT store users’ input data? The answer is complicated. While ChatGPT does not automatically add data from queries to models specifically to make this data available for others to query, any prompt does become visible to OpenAI, the organization behind the large language model. Although no membership inference attacks have yet been carried out against the large language learning models that drive ChatGPT, databases containing saved prompts as well as embedded learnings could be potentially compromised by a cybersecurity breach. OpenAI, the parent company that developed ChatGPT, is working with other companies to limit the general access that language learning models have to personal data and sensitive information. But the technology is still in its nascent developing stages - ChatGPT was only just released to the public in November of last year. By just two months into its public release, ChatGPT had been accessed by over 100 million users, making it the fastest-growing consumer app ever at record-breaking speeds. With such rapid growth and expansion, regulations have been slow to keep up. The user base is so broad that there are abundant security gaps and vulnerabilities throughout the model. Risks of sharing business data with ChatGPT In June 2021, researchers from Apple, Stanford University, Google, Harvard University, and others published a paper that revealed that GPT-2, a language learning model similar to ChatGPT, could accurately recall sensitive information from training documents. The report found that GPT-2 could call up information with specific personal identifiers, recreate exact sequences of text, and provide other sensitive information when prompted. These “training data extraction attacks” could present a growing threat to the security of researchers working on machine learning models, as hackers may be able to access machine learning researcher data and steal their protected intellectual property. One data security company called Cyberhaven has released reports of ChatGPT cybersecurity vulnerabilities it has recently prevented. According to the reports, Cyberhaven has identified and prevented insecure requ]]> 2023-05-22T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/sharing-your-businesss-data-with-chatgpt-how-risky-is-it www.secnews.physaphae.fr/article.php?IdArticle=8338360 False Tool,Threat,Medical ChatGPT,ChatGPT 2.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Le rôle de l'IA dans les soins de santé: révolutionner l'industrie des soins de santé<br>The role of AI in healthcare: Revolutionizing the healthcare industry DeepMind a montré une précision similaire par rapport aux radiologues humains dans l'identification du cancer du sein. & nbsp; Médecine personnalisée: L'IA peut être utilisée pour générer des informations sur les biomarqueurs, les informations génétiques, les allergies et les évaluations psychologiques pour personnaliser le meilleur traitement des patients. . Ces données peuvent être utilisées pour prédire comment le patient réagira à divers cours de traitement pour une certaine condition.Cela peut minimiser les effets indésirables et réduire les coûts des options de traitement inutiles ou coûteuses.De même, il peut être utilisé pour traiter les troubles génétiques avec des plans de traitement personnalisés.Par exemple, Genomics profonde est une entreprise utilisant des systèmes d'IA pour développer des traitements personnalisés pour les troubles génétiques. Diagnostic de la maladie: Les systèmes d'IA peuvent être utilisés pour analyser les données des patients, y compris les antécédents médicaux et les résultats des tests pour établir un diagnostic plus précis et précoce des conditions mortelles comme le cancer.Par exemple, Pfizer a collaboré avec différents services basés sur l'IA pour diagnostiquer les maladies et IBM Watson utilise les PNL et les algorithmes d'apprentissage automatique pour l'oncologie dans l'élaboration de plans de traitement pour les patients atteints de cancer. Découverte de médicaments: L'IA peut être utilisée en R & amp; D pour la découverte de médicaments, ce qui rend le processus plus rapidement.L'IA peut supprimer certaines ]]> 2023-05-01T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/the-role-of-ai-in-healthcare-revolutionizing-the-healthcare-industry www.secnews.physaphae.fr/article.php?IdArticle=8332591 False Prediction,Medical ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services CyberheistNews Vol 13 #11 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears CyberheistNews Vol 13 #11 CyberheistNews Vol 13 #11  |   March 14th, 2023 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears Robert Lemos at DARKReading just reported on a worrying trend. The title said it all, and the news is that more than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes. I'm giving you a short extract of the story and the link to the whole article is below. "Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service. "In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential info, client data, source code, or regulated information to the LLM. "In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company. "And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven. "'There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps," he says. "And how that plays out [remains to be seen] - I think, we're in pregame; we're not even in the first inning.'" Your employees need to be stepped through new-school security awareness training so that they understand the risks of doing things like this. Blog post with links:https://blog.knowbe4.com/employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blockl]]> 2023-03-14T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-11-heads-up-employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears www.secnews.physaphae.fr/article.php?IdArticle=8318404 False Ransomware,Data Breach,Spam,Malware,Threat,Guideline,Medical ChatGPT,ChatGPT 2.0000000000000000