What's new around the internet

Last one

Src Date (GMT) Title Description Tags Stories Notes
ProofPoint.webp 2024-05-01 05:12:14 What's the Best Way to Stop GenAI Data Loss? Take a Human-Centric Approach
(direct link)
Chief information security officers (CISOs) face a daunting challenge as they work to integrate generative AI (GenAI) tools into business workflows. Robust data protection measures are important to protect sensitive data from being leaked through GenAI tools. But CISOs can't just block access to GenAI tools entirely. They must find ways to give users access because these tools increase productivity and drive innovation. Unfortunately, legacy data loss prevention (DLP) tools can't help with achieving the delicate balance between security and usability.

Today's release of Proofpoint DLP Transform changes all that. It provides a modern alternative to legacy DLP tools in a single, economically attractive package. Its innovative features help CISOs strike the right balance between protecting data and usability. It's the latest addition to our award-winning DLP solution, which was recognized as a 2024 Gartner® Peer Insights™ Customers' Choice for Data Loss Prevention. Proofpoint was the only vendor placed in the upper-right "Customers' Choice" quadrant.

In this blog, we'll dig into some of our latest research about GenAI and data loss risks. And we'll explain how Proofpoint DLP Transform provides you with a human-centric approach to reduce those risks.

GenAI increases data loss risks

Users can make great leaps in productivity with ChatGPT and other GenAI tools. However, GenAI also introduces a new channel for data loss. Employees often enter confidential data into these tools as they use them to expedite their tasks.

Security pros are worried, too. Recent Proofpoint research shows that:
- Generative AI is the fastest-growing area of concern for CISOs
- 59% of board members believe that GenAI is a security risk for their business
- "Browsing GenAI sites" is one of the top five alert scenarios configured by companies that use Proofpoint Information Protection
- Valuable business data such as mergers and acquisitions (M&A) documents, supplier contracts, and price lists is listed as the top data to protect

A big problem faced by CISOs is that legacy DLP tools can't capture user behavior and respond to natural language processing-based user interfaces. This leaves security gaps. That's why they often use blunt tools like web filtering to block employees from using GenAI apps altogether.

You can't enforce acceptable use policies for GenAI if you don't understand your content and how employees are interacting with it. If you want your employees to use these tools without putting your data security at risk, you need to take a human-centric approach to data loss.

A human-centric approach stops data loss

With a human-centric approach, you can quickly detect data loss risk across endpoints and cloud apps like Microsoft 365, Google Workspace and Salesforce. Insights into user intent allow you to move fast and take the right steps to respond to data risk.

Proofpoint DLP Transform takes a human-centric approach to solving the security gaps with GenAI. It understands employee behavior as well as the data that they are handling. It surgically allows and disallows employees to use GenAI tools such as OpenAI ChatGPT and Google Gemini based on employee behavior and content inputs, even if the data has been manipulated or has gone through multiple channels (email, web, endpoint or cloud) before reaching them.

Proofpoint DLP Transform accurately identifies sensitive content using classical content and LLM-powered data classifiers and provides deep visibility into user behavior.
This added context enables analysts to reach high-fidelity verdicts about data risk across all key channels, including email, cloud, and managed and unmanaged endpoints.

With a unified console and powerful analytics, Proofpoint DLP Transform can accelerate incident resolution natively or as part of the security operations (SOC) ecosystem. It is built on a cloud-native architecture and features modern privacy controls. Its lightweight and highly stable user-mode agent is unique in
Tags: Tool Medical Cloud ChatGPT ★★★
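For illustration of the human-centric idea above, here is a minimal sketch (all names, labels, and thresholds are hypothetical; this is not Proofpoint's API or product logic) of how a DLP verdict might combine a content classification with user-behavior context before a paste into a GenAI site is allowed:

```python
# Hypothetical sketch of a human-centric DLP decision: combine a content
# classification with user-behavior context before allowing a paste into a
# GenAI tool. All names, labels, and thresholds are illustrative assumptions.
from dataclasses import dataclass

SENSITIVE_LABELS = {"m&a_document", "supplier_contract", "price_list", "source_code"}

@dataclass
class UserContext:
    recent_bulk_downloads: int   # e.g. files pulled from a cloud app in the last hour
    is_departing_employee: bool  # e.g. flagged by an HR feed

def classify(text: str) -> set[str]:
    """Stand-in for a real classifier (pattern-based or LLM-powered)."""
    labels = set()
    if "merger" in text.lower() or "acquisition" in text.lower():
        labels.add("m&a_document")
    if "unit price" in text.lower():
        labels.add("price_list")
    return labels

def genai_paste_verdict(text: str, user: UserContext) -> str:
    labels = classify(text)
    risky_content = bool(labels & SENSITIVE_LABELS)
    risky_user = user.is_departing_employee or user.recent_bulk_downloads > 10
    if risky_content and risky_user:
        return "block"          # high-confidence data-loss risk
    if risky_content or risky_user:
        return "warn_and_log"   # coach the user, keep evidence for analysts
    return "allow"

print(genai_paste_verdict(
    "Draft merger agreement, unit price schedule attached",
    UserContext(recent_bulk_downloads=25, is_departing_employee=False),
))  # -> block
```

The point of the sketch is the combination: neither the content verdict nor the behavioral signal alone decides the outcome, which mirrors the "content plus user intent" framing in the entry above.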
AlienVault.webp 2024-03-07 11:00:00 Securing AI
(direct link)
With the proliferation of AI/ML-enabled technologies to deliver business value, the need to protect data privacy and secure AI/ML applications from security risks is paramount. An AI governance framework such as the NIST AI RMF, adopted to enable business innovation and manage risk, is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles.

Vulnerabilities in ChatGPT

A recently discovered vulnerability in version gpt-3.5-turbo exposed identifiable information. The vulnerability was reported in the news in late November 2023. Repeating a particular word continuously to the chatbot triggered the vulnerability. A group of security researchers from Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the "extractable memorization" of training data that an adversary can extract by querying an ML model without prior knowledge of the training dataset.

The researchers' report shows that an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a newly developed divergence attack on the aligned ChatGPT caused the model to emit training data at a rate 150 times higher than normal. Findings show that larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as model size grows. While similar attacks have been documented with unaligned models, the new ChatGPT vulnerability demonstrated a successful attack on LLM models typically built with the strict guardrails found in aligned models. This raises questions about best practices and methods for how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy.

U.S. and UK bilateral cybersecurity effort on securing AI

The US Cybersecurity and Infrastructure Security Agency (CISA) and the UK's National Cyber Security Centre (NCSC), in cooperation with 21 agencies and ministries from 18 other countries, are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI, part of the U.S. and UK's bilateral cybersecurity effort, were announced at the end of November 2023. The pledge is an acknowledgement of AI risk by nation leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The Department of Homeland Security (DHS) CISA and UK NCSC joint Guidelines for Secure AI System Development aim to ensure that cybersecurity decisions are embedded at every stage of the AI development lifecycle from the start and throughout, and not as an afterthought.

Securing AI by design

Securing AI by design is a key approach to mitigating cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle is secure, from design to development, deployment, and operations and maintenance, is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development align closely with the software development life cycle practices defined in the NCSC's Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF).
The four pillars that embody the Guidelines for Secure AI System Development offer guidance for providers of any AI system, whether newly created from the ground up or built on top of tools and services provided from
Tags: Tool Vulnerability Threat Mobile Medical Cloud Technical ChatGPT ★★
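As a purely illustrative companion to the divergence attack described above, the toy filter below flags prompts that consist mostly of one repeated word, the pattern reported in the gpt-3.5-turbo disclosure. The function name and thresholds are assumptions for the example, not a documented OpenAI or vendor control:

```python
# Toy guardrail: flag prompts that ask a model to repeat one token an abnormal
# number of times, one pattern behind the divergence attack discussed above.
# MAX_REPEAT_RATIO and MIN_TOKENS are illustrative assumptions.
import re
from collections import Counter

MAX_REPEAT_RATIO = 0.6   # >60% of tokens being the same word is treated as suspicious
MIN_TOKENS = 20          # ignore short prompts

def looks_like_divergence_probe(prompt: str) -> bool:
    tokens = re.findall(r"\w+", prompt.lower())
    if len(tokens) < MIN_TOKENS:
        return False
    most_common_count = Counter(tokens).most_common(1)[0][1]
    return most_common_count / len(tokens) > MAX_REPEAT_RATIO

print(looks_like_divergence_probe("poem " * 200))                                    # True
print(looks_like_divergence_probe("Summarize the NIST AI RMF in three sentences."))  # False
```

A heuristic like this would only be one small layer; the guidelines above argue for securing the whole AI lifecycle rather than bolting on single filters.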
RiskIQ.webp 2024-03-05 19:03:47 Staying ahead of threat actors in the age of AI
(direct link)
## Snapshot

Over the last year, the speed, scale, and sophistication of attacks have increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt injections, attempted misuse of large language models (LLMs), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI's blog on the research [here](https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors). Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI. However, Microsoft and our partners continue to study this landscape closely.

The objective of Microsoft's partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including [Microsoft Copilot for Security](https://www.microsoft.com/security/business/ai-machine-learning/microsoft-security-copilot), to elevate defenders everywhere.

## Activity Overview

### **A principled approach to detecting and blocking threat actors**

The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House's Executive Order on AI requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order's request for comprehensive AI safety and security standards.

In line with Microsoft's leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft's policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track. These principles include:

- **Identification and action against malicious threat actors' use:** Upon detection of the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APT or APM groups, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.
- **Notification to other AI service providers:** When we detect a threat actor's use of another service provider's AI, AI APIs, services, and/or systems, Microsoft will promptly notify the service provider and share relevant data. This enables the service provider to independently verify our findings and take action in accordance with their own policies.
- **Collaboration with other stakeholders:** Microsoft will collaborate with other stakeholders to regularly exchange information a
Tags: Ransomware Malware Tool Vulnerability Threat Studies Medical Technical APT 28 ChatGPT APT 4 ★★
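To make the first principle above concrete, here is a minimal sketch (all class names, indicator feeds, and callbacks are hypothetical; this is not Microsoft's or OpenAI's internal tooling) of matching AI-API activity against tracked threat-actor indicators and disrupting the account:

```python
# Hypothetical sketch of "identification and action against malicious use":
# match API activity against a tracked-actor indicator set, then disable the
# account and notify the affected provider. Names and feeds are illustrative.
from dataclasses import dataclass

# Stand-in for an indicator feed of accounts attributed to tracked actors.
TRACKED_ACTOR_ACCOUNTS = {"acct-9f2e", "acct-77aa"}

@dataclass
class ApiEvent:
    account_id: str
    endpoint: str

def review_event(event: ApiEvent, disable_account, notify_partner) -> None:
    if event.account_id in TRACKED_ACTOR_ACCOUNTS:
        disable_account(event.account_id)                  # disrupt: terminate access
        notify_partner(event.account_id, event.endpoint)   # share findings with the provider

review_event(
    ApiEvent(account_id="acct-9f2e", endpoint="/v1/chat/completions"),
    disable_account=lambda acct: print(f"disabled {acct}"),
    notify_partner=lambda acct, ep: print(f"notified partner about {acct} on {ep}"),
)
```

In practice attribution is the hard part; the sketch only shows where the disruption and notification steps sit once an account has been attributed.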
AlienVault.webp 2023-05-23 10:00:00 The intersection of telehealth, AI, and cybersecurity
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Artificial intelligence is the hottest topic in tech today. AI algorithms are capable of breaking down massive amounts of data in the blink of an eye and have the potential to help us all lead healthier, happier lives. The power of machine learning means that AI-integrated telehealth services are on the rise, too. Almost every progressive provider today uses some amount of AI to track patients' health data, schedule appointments, or automatically order medicine. However, AI-integrated telehealth may pose a cybersecurity risk. New technology is vulnerable to malicious actors, and complex AI systems are largely reliant on a web of interconnected Internet of Things (IoT) devices. Before adopting AI, providers and patients must understand the unique opportunities and challenges that come with automation and algorithms.

Improving the healthcare consumer journey

Effective telehealth care is all about connecting patients with the right provider at the right time. Folks who need treatment can't be delayed by bureaucratic practices or burdensome red tape. AI can improve the patient journey by automating monotonous tasks and improving the efficiency of customer identity and access management (CIAM) software. CIAM software that uses AI can utilize digital identity solutions to automate the registration and patient service process. This is important, as most patients say that they'd rather resolve their questions and queries on their own before speaking to a service agent. Self-service features even allow patients to share important third-party data with telehealth systems via IoT tech like smartwatches. AI-integrated CIAM software is interoperable, too. This means that patients and providers can connect to the CIAM using omnichannel pathways. As a result, users can use data from multiple systems within the same telehealth digital ecosystem. However, this omnichannel approach to the healthcare consumer journey still needs to be HIPAA compliant and protect patient privacy.

Medicine and diagnoses

Misdiagnoses are more common than most people realize. In the US, 12 million people are misdiagnosed every year. Diagnoses may be even trickier via telehealth, as doctors can't read patients' body language or physically inspect their symptoms. AI can improve the accuracy of diagnoses by leveraging machine learning algorithms during the decision-making process. These programs can be taught how to distinguish between different types of diseases and may point doctors in the right direction. Preliminary findings suggest that this can improve the accuracy of medical diagnoses to 99.5%. Automated programs can help patients maintain their medicine and re-order repeat prescriptions. This is particularly important for rural patients who are unable to visit the doctor's office and may have limited time to call in. As a result, telehealth portals that use AI to automate the process help providers close the rural-urban divide.

Ethical considerations

AI has clear benefits in telehealth. However, machine learning programs and automated platforms do put patient data at i
Tags: Medical ChatGPT ChatGPT ★★
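As a rough illustration of the HIPAA-compliance point above (consent before ingesting wearable data, plus an audit trail), the following sketch uses entirely hypothetical structures; it is not a real CIAM product or a compliance implementation:

```python
# Illustrative only: accept a smartwatch reading into a telehealth record only
# when the patient has consented to that data type, and keep an audit entry in
# the spirit of HIPAA access accounting. Stores and names are hypothetical.
from datetime import datetime, timezone

consents = {"patient-42": {"heart_rate"}}   # hypothetical consent store
audit_log: list[dict] = []

def ingest_wearable_reading(patient_id: str, data_type: str, value: float) -> bool:
    allowed = data_type in consents.get(patient_id, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "patient": patient_id,
        "data_type": data_type,
        "accepted": allowed,
    })
    if allowed:
        # store `value` in the patient's record (omitted in this sketch)
        return True
    return False

print(ingest_wearable_reading("patient-42", "heart_rate", 72.0))    # True
print(ingest_wearable_reading("patient-42", "blood_glucose", 5.4))  # False: no consent on file
```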
AlienVault.webp 2023-05-22 10:00:00 Sharing your business's data with ChatGPT: How risky is it?
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

As a natural language processing model, ChatGPT - and other similar machine learning-based language models - is trained on huge amounts of textual data. Processing all this data, ChatGPT can produce written responses that sound like they come from a real human being. ChatGPT learns from the data it ingests. If this information includes your sensitive business data, then sharing it with ChatGPT could potentially be risky and lead to cybersecurity concerns. For example, what if you feed ChatGPT pre-earnings company financial information, proprietary software code, or materials used for internal presentations without realizing that practically anybody could obtain that sensitive information just by asking ChatGPT about it? If you use your smartphone to engage with ChatGPT, then a smartphone security breach could be all it takes to access your ChatGPT query history. In light of these implications, let's discuss if - and how - ChatGPT stores its users' input data, as well as potential risks you may face when sharing sensitive business data with ChatGPT.

Does ChatGPT store users' input data?

The answer is complicated. While ChatGPT does not automatically add data from queries to models specifically to make this data available for others to query, any prompt does become visible to OpenAI, the organization behind the large language model. Although no membership inference attacks have yet been carried out against the large language models that drive ChatGPT, databases containing saved prompts as well as embedded learnings could be potentially compromised by a cybersecurity breach. OpenAI, the company that developed ChatGPT, is working with other companies to limit the general access that language models have to personal data and sensitive information. But the technology is still in its nascent developing stages - ChatGPT was only just released to the public in November of last year. Within just two months of its public release, ChatGPT had been accessed by over 100 million users, making it the fastest-growing consumer app ever. With such rapid growth and expansion, regulations have been slow to keep up. The user base is so broad that there are abundant security gaps and vulnerabilities throughout the model.

Risks of sharing business data with ChatGPT

In June 2021, researchers from Apple, Stanford University, Google, Harvard University, and others published a paper that revealed that GPT-2, a language model similar to ChatGPT, could accurately recall sensitive information from training documents. The report found that GPT-2 could call up information with specific personal identifiers, recreate exact sequences of text, and provide other sensitive information when prompted. These "training data extraction attacks" could present a growing threat to the security of researchers working on machine learning models, as hackers may be able to access machine learning researcher data and steal their protected intellectual property. One data security company called Cyberhaven has released reports of ChatGPT cybersecurity vulnerabilities it has recently prevented. According to the reports, Cyberhaven has identified and prevented insecure requ
Tags: Tool Threat Medical ChatGPT ChatGPT ★★
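One practical precaution implied by the risks above is to scan prompts for obviously sensitive business data before they leave for an external LLM. The sketch below is deliberately crude; the regex patterns are illustrative assumptions, and a real DLP product (such as the Cyberhaven service mentioned above) uses far richer classifiers:

```python
# Hedged sketch: screen a prompt for sensitive business data before sending it
# to an external LLM API. Patterns are illustrative assumptions only.
import re

SENSITIVE_PATTERNS = {
    "pre_earnings_figure": re.compile(r"\bQ[1-4]\s*FY\d{2}\b.*\brevenue\b", re.I),
    "api_key_or_secret": re.compile(r"\b(sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "patient_identifier": re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Draft a letter to the insurer for patient MRN: 0045821 about their condition."
findings = scan_prompt(prompt)
if findings:
    print("blocked before sending to the LLM:", findings)   # ['patient_identifier']
else:
    print("prompt sent")
```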
AlienVault.webp 2023-05-01 10:00:00 The role of AI in healthcare: Revolutionizing the healthcare industry
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Introduction

Artificial intelligence (AI) is the mimicry of certain aspects of human behavior, such as language processing and decision-making, using large language models (LLMs) and natural language processing (NLP). LLMs are a specific type of AI that analyzes and generates natural language using deep learning algorithms. AI programs are designed to think like humans and mimic their actions without being biased or influenced by emotions.

LLMs provide systems for processing large datasets and give a clearer view of the task at hand. AI can be used to identify patterns, analyze data, and make predictions based on the data provided to it. It can be used in chatbots, virtual assistants, language translation, and image processing systems. Some leading AI providers are ChatGPT by OpenAI, Bard by Google, Bing AI by Microsoft, and Watson AI by IBM. AI has the potential to revolutionize various industries, including transportation, finance, healthcare, and more, by making fast, accurate, and informed decisions with the help of large datasets. In this article, we will discuss some applications of AI in healthcare.

Applications of AI in healthcare

Several AI applications have been implemented in the healthcare sector and have proven very successful. Some examples are:

Medical imaging: AI algorithms are used to analyze medical images such as X-rays, MRI scans, and CT scans. AI algorithms can help radiologists identify abnormalities, helping them make more accurate diagnoses. For example, Google's AI-powered DeepMind has shown accuracy comparable to human radiologists in identifying breast cancer.

Personalized medicine: AI can be used to generate insights from biomarkers, genetic information, allergies, and psychological assessments to tailor the best treatment for patients. This data can be used to predict how a patient will respond to various courses of treatment for a given condition. This can minimize adverse effects and reduce the cost of unnecessary or expensive treatment options. Likewise, it can be used to treat genetic disorders with personalized treatment plans. For example, Deep Genomics is a company using AI systems to develop personalized treatments for genetic disorders.

Disease diagnosis: AI systems can be used to analyze patient data, including medical history and test results, to make earlier and more accurate diagnoses of deadly conditions such as cancer. For example, Pfizer has collaborated with various AI-based services to diagnose diseases, and IBM Watson uses NLP and machine learning algorithms in oncology to develop treatment plans for cancer patients.
Drug discovery: AI can be used in R&D for drug discovery, making the process faster. AI can remove some
Tags: Prediction Medical ChatGPT ChatGPT ★★
knowbe4.webp 2023-03-14 13:00:00 CyberheistNews Vol 13 #11 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears
(direct link)
CyberheistNews Vol 13 #11  |  March 14th, 2023

[Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears

Robert Lemos at DARKReading just reported on a worrying trend. The title said it all, and the news is that more than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes. I'm giving you a short extract of the story, and the link to the whole article is below.

"Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service.

"In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential info, client data, source code, or regulated information to the LLM.

"In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company.

"And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven.

"'There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps," he says. "And how that plays out [remains to be seen] - I think, we're in pregame; we're not even in the first inning.'"

Your employees need to be stepped through new-school security awareness training so that they understand the risks of doing things like this.

Blog post with links: https://blog.knowbe4.com/employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears

[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blockl
Tags: Ransomware Data Breach Spam Malware Threat Guideline Medical ChatGPT ChatGPT ★★
Last update at: 2024-05-09 02:07:51