What's new arround internet

Last one

Src Date (GMT) Titre Description Tags Stories Notes
silicon.fr.webp 2023-06-12 09:51:37 Bard, ChatGPT et leurs limites de raisonnement (lien direct) Google a doté Bard d'une capacité d'" exécution implicite de code ". Quels sont, en pratique, ses apports face à ChatGPT ? ChatGPT ChatGPT ★★
Pirate.webp 2023-06-10 16:09:06 Nouvelle technique d\'attaque basée sur ChatGPT (lien direct) >L’équipe de recherche Voyager18 de Vulcan Cyber a découvert une nouvelle technique d’attaque basée sur ChatGPT – appelée AI package hallucination – qui tire parti du phénomène des hallucinations parfois créées par les IA génératives. Il a déjà été observé que ChatGPT peut générer des URL, des références et même des bibliothèques de code et […] The post Nouvelle technique d'attaque basée sur ChatGPT first appeared on UnderNews. ChatGPT ChatGPT ★★★
globalsecuritymag.webp 2023-06-08 15:46:07 Nouvelle technique d\'attaque basée sur ChatGPT les Commentaires de Tanium (lien direct) Nouvelle technique d'attaque basée sur ChatGPT les Commentaires de Melissa Bischoping, Directrice, Endpoint Security Research chez Tanium r - Malwares ChatGPT ChatGPT ★★
SecureMac.webp 2023-06-07 20:58:01 Cybersécurité à l'ère de l'IA
Cybersecurity in the Age of AI
(lien direct)
> Conseils de cybersécurité AI et plus encore.Comment l'IA est-elle une cyber-menace?Que pouvez-vous faire pour vous protéger à l'ère de Chatgpt, de clonage vocal et de faux?
>AI cybersecurity tips and more. How is AI a cyber threat? What can you do to protect yourself in an age of ChatGPT, voice cloning, and deep fakes?
ChatGPT ★★
SecureList.webp 2023-06-07 08:00:34 It menace évolution Q1 2023
IT threat evolution Q1 2023
(lien direct)
Les activités récentes de Bluenoroff et de Mantis itinérantes, une nouvelle approche liée au conflit Russo-ukrainien, au chat de la menace, à la malvertiser par des moteurs de recherche, une campagne de vol de crypto-monnaie et un faux navigateur Tor
Recent BlueNoroff and Roaming Mantis activities, new APT related to the Russo-Ukrainian conflict, ChatGPT and threat intelligence, malvertising through search engines, cryptocurrency theft campaign and fake Tor browser
Threat ChatGPT ChatGPT ★★★
InfoSecurityMag.webp 2023-06-06 15:00:00 La nouvelle technique d'attaque de Chatgpt répartit les forfaits malveillants
New ChatGPT Attack Technique Spreads Malicious Packages
(lien direct)
L'équipe de recherche Vulcan Cyber \'s Voyager18 a appelé la technique "Hallucination du package AI"
Vulcan Cyber\'s Voyager18 research team called the technique "AI package hallucination"
ChatGPT ChatGPT ★★★
globalsecuritymag.webp 2023-06-06 14:01:25 Vulcan Cyber : Pouvez-vous faire confiance aux recommandations de ChatGPT ? (lien direct) Vulcan Cyber explique : Pouvez-vous faire confiance aux recommandations de ChatGPT ? ChatGPT peut offrir des solutions de codage, mais sa tendance à créer des hallucinations offre une opportunité aux attaquants. Voici ce que nous avons appris. - Points de Vue ChatGPT ChatGPT ★★★
DarkReading.webp 2023-06-06 12:00:00 Chatgpt Hallucinations ouvre les développeurs aux attaques de logiciels malveillants de la chaîne d'approvisionnement
ChatGPT Hallucinations Open Developers to Supply-Chain Malware Attacks
(lien direct)
Les attaquants pourraient exploiter une expérience d'interdiction en IA commune pour diffuser du code malveillant via des développeurs qui utilisent Chatgpt pour créer un logiciel.
Attackers could exploit a common AI experience-false recommendations-to spread malicious code via developers that use ChatGPT to create software.
Malware ChatGPT ChatGPT ★★
The_State_of_Security.webp 2023-06-06 02:59:40 Ce que font les API et ne faites pas
What APIs Do and Don\\'t Do
(lien direct)
Il est difficile d'être dans le domaine de la technologie et de ne pas entendre parler d'API ces jours-ci.Qu'il s'agisse du lancement de l'API Chatgpt ou des nouvelles d'une violation de données importante sur Twitter, les API ont leur temps sous les projecteurs.Pourtant, malgré leur ubiquité, beaucoup ont encore des questions sur les capacités (et les limitations) des API.À quoi servent les API?Que font-ils?Et que sont-ils incapables de faire à l'ère actuelle?Qu'est-ce qu'une API?Une API est une interface de programmation d'applications - un petit logiciel conçu pour la communication.Une API sert de messager entre un utilisateur final et un site Web ou une application ...
It\'s hard to be in the realm of technology and not hear about APIs these days. Whether it\'s the launch of the ChatGPT API or news of a significant data breach at Twitter, APIs are having their time in the spotlight. Yet, despite their ubiquity, many still have questions about APIs\' capabilities (and limitations). What are APIs for? What do they do? And what are they unable to do in the current era? What is an API? An API is an Application Programming Interface - a small piece of software designed for communication. An API serves as a messenger between an end user and a website or application...
Data Breach ChatGPT ChatGPT ★★
Netskope.webp 2023-06-05 19:16:46 Comprendre les risques des attaques d'injection rapides contre le chatgpt et d'autres modèles de langue
Understanding the Risks of Prompt Injection Attacks on ChatGPT and Other Language Models
(lien direct)
> Résumé Les modèles de grandes langues (LLM), tels que Chatgpt, ont gagné en popularité pour leur capacité à générer des conversations humaines et à aider les utilisateurs à diverses tâches.Cependant, avec leur utilisation croissante, les préoccupations concernant les vulnérabilités potentielles et les risques de sécurité ont émergé.Une telle préoccupation est des attaques d'injection rapides, où les acteurs malveillants tentent de manipuler le comportement de [& # 8230;]
>Summary Large language models (LLMs), such as ChatGPT, have gained significant popularity for their ability to generate human-like conversations and assist users with various tasks. However, with their increasing use, concerns about potential vulnerabilities and security risks have emerged. One such concern is prompt injection attacks, where malicious actors attempt to manipulate the behavior of […]
ChatGPT ChatGPT ★★★
CVE.webp 2023-06-02 16:15:09 CVE-2023-34094 (lien direct) ChuanhuChatGPT is a graphical user interface for ChatGPT and many large language models. A vulnerability in versions 20230526 and prior allows unauthorized access to the config.json file of the privately deployed ChuanghuChatGPT project, when authentication is not configured. The attacker can exploit this vulnerability to steal the API keys in the configuration file. The vulnerability has been fixed in commit bfac445. As a workaround, setting up access authentication can help mitigate the vulnerability.
ChuanhuChatGPT is a graphical user interface for ChatGPT and many large language models. A vulnerability in versions 20230526 and prior allows unauthorized access to the config.json file of the privately deployed ChuanghuChatGPT project, when authentication is not configured. The attacker can exploit this vulnerability to steal the API keys in the configuration file. The vulnerability has been fixed in commit bfac445. As a workaround, setting up access authentication can help mitigate the vulnerability.
Vulnerability ChatGPT ChatGPT
Chercheur.webp 2023-06-02 14:21:40 LLMS open source
Open-Source LLMs
(lien direct)
En février, Meta a publié son modèle grand langage: Llama.Contrairement à Openai et à son chatppt, Meta n'a pas simplement donné à la fenêtre de discussion au monde avec laquelle jouer.Au lieu de cela, il a publié le code dans la communauté open source et, peu de temps après, le modèle lui-même a été divulgué.Les chercheurs et les programmeurs ont immédiatement commencé à le modifier, à l'améliorer et à le faire faire des choses que personne d'autre prévoyait.Et leurs résultats ont été immédiats, innovants et une indication de la façon dont l'avenir de cette technologie va se dérouler.Les vitesses d'entraînement ont considérablement augmenté et la taille des modèles eux-mêmes s'est rétréci au point que vous pouvez les créer et les exécuter sur un ordinateur portable.Le monde de la recherche sur l'IA a radicalement changé ...
In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of AI research has dramatically changed...
ChatGPT ★★
CVE.webp 2023-05-31 19:15:27 CVE-2023-33979 (lien direct) GPT_ACADEMIC fournit une interface graphique pour Chatgpt / GLM.Une vulnérabilité a été trouvée dans GPT_ACADEMIM 3.37 et antérieure.Ce problème affecte un traitement inconnu du gestionnaire de fichiers de configuration des composants.La manipulation du fichier d'argument conduit à la divulgation d'informations.Étant donné qu'aucun fichier sensible n'est configuré pour être interdit, les fichiers d'informations sensibles dans certains répertoires de travail peuvent être lus via l'itinéraire «/ fichier», conduisant à une fuite d'informations sensibles.Cela affecte les utilisateurs qui utilisent des configurations de fichiers via `config.py`,` config_private.py`, `dockerfile`.Un correctif est disponible chez commit 1dcc2873d2168ad2d3d70afcb453ac1695fbdf02.En tant que solution de contournement, on peut utiliser des variables d'environnement au lieu de fichiers `config * .py` pour configurer ce projet, ou utiliser une installation Docker-Compose pour configurer ce projet.
gpt_academic provides a graphical interface for ChatGPT/GLM. A vulnerability was found in gpt_academic 3.37 and prior. This issue affects some unknown processing of the component Configuration File Handler. The manipulation of the argument file leads to information disclosure. Since no sensitive files are configured to be off-limits, sensitive information files in some working directories can be read through the `/file` route, leading to sensitive information leakage. This affects users that uses file configurations via `config.py`, `config_private.py`, `Dockerfile`. A patch is available at commit 1dcc2873d2168ad2d3d70afcb453ac1695fbdf02. As a workaround, one may use environment variables instead of `config*.py` files to configure this project, or use docker-compose installation to configure this project.
Vulnerability ChatGPT
knowbe4.webp 2023-05-31 13:00:00 Cyberheistnews Vol 13 # 22 [Eye on Fraud] Un examen plus approfondi de la hausse massive de 72% des attaques de phishing financier
CyberheistNews Vol 13 #22 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks
(lien direct)
CyberheistNews Vol 13 #22 CyberheistNews Vol 13 #22  |   May 31st, 2023 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks With attackers knowing financial fraud-based phishing attacks are best suited for the one industry where the money is, this massive spike in attacks should both surprise you and not surprise you at all. When you want tires, where do you go? Right – to the tire store. Shoes? Yup – shoe store. The most money you can scam from a single attack? That\'s right – the financial services industry, at least according to cybersecurity vendor Armorblox\'s 2023 Email Security Threat Report. According to the report, the financial services industry as a target has increased by 72% over 2022 and was the single largest target of financial fraud attacks, representing 49% of all such attacks. When breaking down the specific types of financial fraud, it doesn\'t get any better for the financial industry: 51% of invoice fraud attacks targeted the financial services industry 42% were payroll fraud attacks 63% were payment fraud To make matters worse, nearly one-quarter (22%) of financial fraud attacks successfully bypassed native email security controls, according to Armorblox. That means one in five email-based attacks made it all the way to the Inbox. The next layer in your defense should be a user that\'s properly educated using security awareness training to easily identify financial fraud and other phishing-based threats, stopping them before they do actual damage. Blog post with links:https://blog.knowbe4.com/financial-fraud-phishing [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us Wednesday, June 7, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 introduces a new-school approach to security awareness training and simulated phishing. Get a look at THREE NEW FEATURES and see how easy it is to train and phish your users. Ransomware Malware Hack Tool Threat Conference Uber ChatGPT ChatGPT Guam ★★
bleepingcomputer.webp 2023-05-30 15:01:01 ROMCOM MALWARE SPEAT via Google Ads pour Chatgpt, GIMP, plus
RomCom malware spread via Google Ads for ChatGPT, GIMP, more
(lien direct)
A new campaign distributing the RomCom backdoor malware is impersonating the websites of well-known or fictional software, tricking users into downloading and launching malicious installers. [...]
A new campaign distributing the RomCom backdoor malware is impersonating the websites of well-known or fictional software, tricking users into downloading and launching malicious installers. [...]
Malware ChatGPT ★★
securityintelligence.webp 2023-05-30 13:00:00 Now Social Engineering Attackers Have AI. Do You? (lien direct) > Tout le monde en technologie parle de Chatgpt, le chatbot basé sur l'IA d'Open IA qui écrit de la prose convaincante et du code utilisable.Le problème est que les cyber-attaquants malveillants peuvent utiliser des outils d'IA génératifs comme Chatgpt pour élaborer une prose convaincante et un code utilisable comme tout le monde.Comment cette nouvelle catégorie d'outils puissante affecte-t-elle la capacité [& # 8230;]
>Everybody in tech is talking about ChatGPT, the AI-based chatbot from Open AI that writes convincing prose and usable code. The trouble is malicious cyber attackers can use generative AI tools like ChatGPT to craft convincing prose and usable code just like everybody else. How does this powerful new category of tools affect the ability […]
ChatGPT ChatGPT ★★
securityintelligence.webp 2023-05-30 13:00:00 Maintenant, les attaquants d'ingénierie sociale ont l'IA.Est-ce que tu?
Now Social Engineering Attackers Have AI. Do You?
(lien direct)
> Tout le monde en technologie parle de Chatgpt, le chatbot basé sur l'IA d'Open IA qui écrit une prose convaincante et un code utilisable. & # 160;Le problème est que les cyber-attaquants malveillants peuvent utiliser des outils d'IA génératifs comme Chatgpt pour élaborer une prose convaincante et un code utilisable comme tout le monde. & # 160;Comment cette nouvelle catégorie d'outils puissante affecte-t-elle la capacité [& # 8230;]
>Everybody in tech is talking about ChatGPT, the AI-based chatbot from Open AI that writes convincing prose and usable code.  The trouble is malicious cyber attackers can use generative AI tools like ChatGPT to craft convincing prose and usable code just like everybody else.  How does this powerful new category of tools affect the ability […]
ChatGPT ChatGPT ★★
SocRadar.webp 2023-05-30 08:30:00 Chatgpt pour les analystes SOC
ChatGPT for SOC Analysts
(lien direct)
>ChatGPT, the language model developed by OpenAI, has taken the tech world by storm since its...
>ChatGPT, the language model developed by OpenAI, has taken the tech world by storm since its...
ChatGPT ChatGPT ★★★
DataSecurityBreach.webp 2023-05-30 08:03:35 Microsoft propose d\'isoler les applications pour une meilleure sécurité (lien direct) Alors que l'on entend surtout parler de ChatGPT dans les outils Microsoft, une autre fonctionnalité plus intéressante apparait dans Windows 11 : l'isolation des applications Win32. ChatGPT ChatGPT ★★
Korben.webp 2023-05-27 07:00:00 Chatterbox – ChatGPT à portée de main (lien direct) Si vous aimez Chat GPT, que vous ayez pris l’abonnement payant ou non, vous devez parfois être frustré par le site web qui n’est pas forcement super pratique à utiliser. C’est là qu’entre en scène Chatterbox pour macOS qui permet d’interagir facilement avec ChatGPT au travers d’une application native. L’idée … Suite ChatGPT ChatGPT ★★
mcafee.webp 2023-05-26 17:12:03 Tout le monde peut essayer Chatgpt pour Free-Don \\ 'ne tombe pas pour les applications sommaires qui vous facturent
Anyone Can Try ChatGPT for Free-Don\\'t Fall for Sketchy Apps That Charge You
(lien direct)
> N'importe qui peut essayer gratuitement Chatgpt.Pourtant, cela n'a pas empêché les escrocs d'essayer de l'encaisser.Une éruption cutanée ...
> Anyone can try ChatGPT for free. Yet that hasn\'t stopped scammers from trying to cash in on it.   A rash...
ChatGPT ChatGPT ★★
silicon.fr.webp 2023-05-26 14:59:50 Règlement IA : le CEO d\'OpenAI met en garde l\'Europe (lien direct) Est-ce une menace ? OpenAI, créateur de ChatGPT, cesserait ses opérations dans l'Union européenne s'il ne parveanit pas à se conformer au règlement IA en devenir (EU AI Act). ChatGPT ★★
Chercheur.webp 2023-05-25 11:05:43 Sur l'empoisonnement des LLM
On the Poisoning of LLMs
(lien direct)
Intéressant essai sur l'empoisonnement des LLM & # 8212; Chatgpt en particulier: Étant donné que nous connaissons l'empoisonnement des modèles depuis des années, et étant donné les fortes incitations, la foule de référencePendant des mois.Nous ne savons pas parce que Openai ne parle pas de leurs processus, de la façon dont ils valident les invites qu'ils utilisent pour la formation, de la façon dont ils vérifient leur ensemble de données de formation ou de la façon dont ils affinent Chatgpt.Leur secret signifie que nous ne savons pas si le chatppt a été géré en toute sécurité. Ils devront également mettre à jour leur ensemble de données de formation à un moment donné.Ils ne peuvent pas laisser leurs modèles coincés en 2021 pour toujours ...
Interesting essay on the poisoning of LLMs—ChatGPT in particular: Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know if ChatGPT has been safely managed. They’ll also have to update their training data set at some point. They can’t leave their models stuck in 2021 forever...
ChatGPT ChatGPT ★★
silicon.fr.webp 2023-05-24 11:21:58 Microsoft Build 2023 : 5 annonces majeures autour de l\'IA (lien direct) Windows Copilot, Azure OpenAI Studio, Bing par défaut dans ChatGPT... font partie des offres par le biais desquelles Microsoft ambitionne de mettre l'intelligence artificielle à la portée de tous les développeurs de son écosystème. ChatGPT ★★
silicon.fr.webp 2023-05-24 09:25:28 Pourquoi ChatGPT présente un risque de sécurité pour les organisations (même si elles ne l\'utilisent pas) (lien direct) ChatGPT n'est peut-être pas utilisé par toutes les entreprises et est peut-être même interdit dans de nombreuses organisations. Mais cela ne signifie pas que ces dernières ne sont pas exposées aux risques de sécurité qu'il contient. Cet article examine les raisons pour lesquelles ChatGPT devrait faire partie de l'état des lieux des menaces des organisations. ChatGPT ChatGPT ★★
Netskope.webp 2023-05-23 14:00:00 Construisez une culture et une gouvernance pour activer en toute sécurité le chatppt et les applications d'inte
Build a Culture and Governance to Securely Enable ChatGPT and Generative AI Applications-Or Get Left Behind
(lien direct)
> Co-auteur par James Robinson et Jason Clark à peine Chatgpt et le sujet de l'intelligence artificielle générative (IA) se réunissent que chaque leader de la technologie d'entreprise d'entreprise a commencé à poser la même question.Est-ce sûr?Chez NetSkope, notre réponse est fournis par oui, nous faisons toutes les bonnes choses avec la protection sensible des données et le responsable [& # 8230;]
>Co-authored by James Robinson and Jason Clark No sooner did ChatGPT and the topic of generative artificial intelligence (AI) go mainstream than every enterprise business technology leader started asking the same question. Is it safe? At Netskope, our answer is yes-provided we are doing all the right things with sensitive data protection and the responsible […]
ChatGPT ChatGPT ★★
knowbe4.webp 2023-05-23 13:00:00 Cyberheistnews Vol 13 # 21 [Double Trouble] 78% des victimes de ransomwares sont confrontées à plusieurs extensions en tendance effrayante
CyberheistNews Vol 13 #21 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend
(lien direct)
CyberheistNews Vol 13 #21 CyberheistNews Vol 13 #21  |   May 23rd, 2023 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend New data sheds light on how likely your organization will succumb to a ransomware attack, whether you can recover your data, and what\'s inhibiting a proper security posture. You have a solid grasp on what your organization\'s cybersecurity stance does and does not include. But is it enough to stop today\'s ransomware attacks? CyberEdge\'s 2023 Cyberthreat Defense Report provides some insight into just how prominent ransomware attacks are and what\'s keeping orgs from stopping them. According to the report, in 2023: 7% of organizations were victims of a ransomware attack 7% of those paid a ransom 73% were able to recover data Only 21.6% experienced solely the encryption of data and no other form of extortion It\'s this last data point that interests me. Nearly 78% of victim organizations experienced one or more additional forms of extortion. CyberEdge mentions threatening to publicly release data, notifying customers or media, and committing a DDoS attack as examples of additional threats mentioned by respondents. IT decision makers were asked to rate on a scale of 1-5 (5 being the highest) what were the top inhibitors of establishing and maintaining an adequate defense. The top inhibitor (with an average rank of 3.66) was a lack of skilled personnel – we\'ve long known the cybersecurity industry is lacking a proper pool of qualified talent. In second place, with an average ranking of 3.63, is low security awareness among employees – something only addressed by creating a strong security culture with new-school security awareness training at the center of it all. Blog post with links:https://blog.knowbe4.com/ransomware-victim-threats [Free Tool] Who Will Fall Victim to QR Code Phishing Attacks? Bad actors have a new way to launch phishing attacks to your users: weaponized QR codes. QR code phishing is especially dangerous because there is no URL to check and messages bypass traditional email filters. With the increased popularity of QR codes, users are more at Ransomware Hack Tool Vulnerability Threat Prediction ChatGPT ★★
SocRadar.webp 2023-05-23 11:47:24 Chatgpt pour les professionnels du CTI
ChatGPT for CTI Professionals
(lien direct)
> En 1950, Alan Turing, le père de l'informatique moderne, a demandé, & # 8220; les machines peuvent-elles penser? & # 8221;Sur le ...
>In 1950, Alan Turing, the father of modern computing, asked, “Can machines think?” Over the...
ChatGPT ChatGPT ★★
AlienVault.webp 2023-05-23 10:00:00 L'intersection de la télésanté, de l'IA et de la cybersécurité
The intersection of telehealth, AI, and Cybersecurity
(lien direct)
The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.  Artificial intelligence is the hottest topic in tech today. AI algorithms are capable of breaking down massive amounts of data in the blink of an eye and have the potential to help us all lead healthier, happier lives. The power of machine learning means that AI-integrated telehealth services are on the rise, too. Almost every progressive provider today uses some amount of AI to track patients’ health data, schedule appointments, or automatically order medicine. However, AI-integrated telehealth may pose a cybersecurity risk. New technology is vulnerable to malicious actors and complex AI systems are largely reliant on a web of interconnected Internet of Things (IoT) devices. Before adopting AI, providers and patients must understand the unique opportunities and challenges that come with automation and algorithms. Improving the healthcare consumer journey Effective telehealth care is all about connecting patients with the right provider at the right time. Folks who need treatment can’t be delayed by bureaucratic practices or burdensome red tape. AI can improve the patient journey by automating monotonous tasks and improving the efficiency of customer identity and access management (CIAM) software. CIAM software that uses AI can utilize digital identity solutions to automate the registration and patient service process. This is important, as most patients say that they’d rather resolve their own questions and queries on their own before speaking to a service agent. Self-service features even allow patients to share important third-party data with telehealth systems via IoT tech like smartwatches. AI-integrated CIAM software is interoperable, too. This means that patients and providers can connect to the CIAM using omnichannel pathways. As a result, users can use data from multiple systems within the same telehealth digital ecosystem. However, this omnichannel approach to the healthcare consumer journey still needs to be HIPAA compliant and protect patient privacy. Medicine and diagnoses Misdiagnoses are more common than most people realize. In the US, 12 million people are misdiagnosed every year. Diagnoses may be even more tricky via telehealth, as doctors can’t read patients\' body language or physically inspect their symptoms. AI can improve the accuracy of diagnoses by leveraging machine learning algorithms during the decision-making process. These programs can be taught how to distinguish between different types of diseases and may point doctors in the right direction. Preliminary findings suggest that this can improve the accuracy of medical diagnoses to 99.5%. Automated programs can help patients maintain their medicine and re-order repeat prescriptions. This is particularly important for rural patients who are unable to visit the doctor\'s office and may have limited time to call in. As a result, telehealth portals that use AI to automate the process help providers close the rural-urban divide. Ethical considerations AI has clear benefits in telehealth. However, machine learning programs and automated platforms do put patient data at i Medical ChatGPT ChatGPT ★★
silicon.fr.webp 2023-05-22 10:33:06 Apple restreint l\'usage de ChatGPT pour ses équipes (lien direct) Apple restreint l'utilisation de l'IA générative ChatGPT pour certains de ses employés. La firme craint la diffusion de données sensibles. ChatGPT ChatGPT ★★★
AlienVault.webp 2023-05-22 10:00:00 Partager les données de votre entreprise avec Chatgpt: à quel point est-elle risquée?
Sharing your business\\'s data with ChatGPT: How risky is it?
(lien direct)
The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.  As a natural language processing model, ChatGPT - and other similar machine learning-based language models - is trained on huge amounts of textual data. Processing all this data, ChatGPT can produce written responses that sound like they come from a real human being. ChatGPT learns from the data it ingests. If this information includes your sensitive business data, then sharing it with ChatGPT could potentially be risky and lead to cybersecurity concerns. For example, what if you feed ChatGPT pre-earnings company financial information, company proprietary software codeor materials used for internal presentations without realizing that practically anybody could obtain that sensitive information just by asking ChatGPT about it? If you use your smartphone to engage with ChatGPT, then a smartphone security breach could be all it takes to access your ChatGPT query history. In light of these implications, let\'s discuss if - and how - ChatGPT stores its users\' input data, as well as potential risks you may face when sharing sensitive business data with ChatGPT. Does ChatGPT store users’ input data? The answer is complicated. While ChatGPT does not automatically add data from queries to models specifically to make this data available for others to query, any prompt does become visible to OpenAI, the organization behind the large language model. Although no membership inference attacks have yet been carried out against the large language learning models that drive ChatGPT, databases containing saved prompts as well as embedded learnings could be potentially compromised by a cybersecurity breach. OpenAI, the parent company that developed ChatGPT, is working with other companies to limit the general access that language learning models have to personal data and sensitive information. But the technology is still in its nascent developing stages - ChatGPT was only just released to the public in November of last year. By just two months into its public release, ChatGPT had been accessed by over 100 million users, making it the fastest-growing consumer app ever at record-breaking speeds. With such rapid growth and expansion, regulations have been slow to keep up. The user base is so broad that there are abundant security gaps and vulnerabilities throughout the model. Risks of sharing business data with ChatGPT In June 2021, researchers from Apple, Stanford University, Google, Harvard University, and others published a paper that revealed that GPT-2, a language learning model similar to ChatGPT, could accurately recall sensitive information from training documents. The report found that GPT-2 could call up information with specific personal identifiers, recreate exact sequences of text, and provide other sensitive information when prompted. These “training data extraction attacks” could present a growing threat to the security of researchers working on machine learning models, as hackers may be able to access machine learning researcher data and steal their protected intellectual property. One data security company called Cyberhaven has released reports of ChatGPT cybersecurity vulnerabilities it has recently prevented. According to the reports, Cyberhaven has identified and prevented insecure requ Tool Threat Medical ChatGPT ChatGPT ★★
globalsecuritymag.webp 2023-05-22 07:53:52 Sophos : De fausses applications ChatGPT escroquent des utilisateurs de plusieurs milliers de dollars (lien direct) De fausses applications ChatGPT escroquent des utilisateurs de plusieurs milliers de dollars, selon un rapport de Sophos Ces applications – appelées des fleecewares – exploitent des failles dans la politique des app stores ainsi que des tactiques coercitives pour surfacturer l'utilisation d'assistants IA - Malwares ChatGPT ChatGPT ★★
The_State_of_Security.webp 2023-05-22 06:58:09 How ChatGPT is Changing Our World (lien direct) Le modèle de langue basé sur l'intelligence artificielle (IA), Chatgpt, a attiré beaucoup d'attention récemment, et à juste titre.C'est sans doute l'innovation technique la plus populaire la plus populaire depuis l'introduction des haut-parleurs intelligents désormais omniprésents dans nos maisons qui nous permettent d'appeler une question et de recevoir une réponse instantanée.Mais qu'est-ce que c'est et pourquoi est-il pertinent pour la cybersécurité et la protection des données?Qu'est-ce que le chatppt?Le "GPT" dans Chatgpt signifie "Generative Pre-foraged Transformer".Il s'agit d'un modèle de traitement du langage naturel (NLP) à la pointe de la technologie développé par OpenAI, basé sur un apprentissage en profondeur ...
The Artificial intelligence (AI) based language model, ChatGPT, has gained a lot of attention recently, and rightfully so. It is arguably the most widely popular technical innovation since the introduction of the now ubiquitous smart speakers in our homes that enable us to call out a question and receive an instant answer. But what is it, and why is it relevant to cyber security and data protection? What is ChatGPT? The "GPT" in ChatGPT stands for "Generative Pre-Trained Transformer". It is a state-of-the-art Natural Language Processing (NLP) model developed by OpenAI, based on a deep learning...
ChatGPT ChatGPT ★★★
The_State_of_Security.webp 2023-05-22 06:58:09 Comment Chatgpt change notre monde
How ChatGPT is Changing Our World
(lien direct)
Le modèle de langue basé sur l'intelligence artificielle (IA), Chatgpt, a attiré beaucoup d'attention récemment, et à juste titre.C'est sans doute l'innovation technique la plus populaire la plus populaire depuis l'introduction des haut-parleurs intelligents désormais omniprésents dans nos maisons qui nous permettent d'appeler une question et de recevoir une réponse instantanée.Mais qu'est-ce que c'est et pourquoi est-il pertinent pour la cybersécurité et la protection des données?Qu'est-ce que le chatppt?Le "GPT" dans Chatgpt signifie "Generative Pre-foraged Transformer".Il s'agit d'un modèle de traitement du langage naturel (NLP) à la pointe de la technologie développé par OpenAI, basé sur un apprentissage en profondeur ...
The Artificial intelligence (AI) based language model, ChatGPT, has gained a lot of attention recently, and rightfully so. It is arguably the most widely popular technical innovation since the introduction of the now ubiquitous smart speakers in our homes that enable us to call out a question and receive an instant answer. But what is it, and why is it relevant to cyber security and data protection? What is ChatGPT? The "GPT" in ChatGPT stands for "Generative Pre-Trained Transformer". It is a state-of-the-art Natural Language Processing (NLP) model developed by OpenAI, based on a deep learning...
ChatGPT ChatGPT ★★★
ArsTechnica.webp 2023-05-19 16:16:03 Fearing leaks, Apple restricts its employees from using ChatGPT and AI tools (lien direct) Cloud AI tools could leak confidential Apple company data; Apple works on its own LLM.
Cloud AI tools could leak confidential Apple company data; Apple works on its own LLM.
Cloud ChatGPT ChatGPT ★★
The_Hackers_News.webp 2023-05-19 12:23:00 Vous recherchez des outils d'IA?Attention aux sites voyous distribuant des logiciels malveillants Redline
Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware
(lien direct)
Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire
Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire
Malware ChatGPT ChatGPT ★★
DarkReading.webp 2023-05-18 15:49:00 Ox Security lance Ox-GPT, la première intégration de Chatgpt AppSec \\
OX Security Launches OX-GPT, AppSec\\'s First ChatGPT Integration
(lien direct)
Correction des recommandations personnalisées et les correctifs de code CUT et coller réduisent considérablement les temps de correction.
Customized fix recommendations and cut and paste code fixes dramatically reduce remediation times.
ChatGPT ChatGPT ★★
DarkReading.webp 2023-05-18 14:00:00 3 façons dont les pirates utilisent le chatppt pour provoquer des maux de tête de sécurité
3 Ways Hackers Use ChatGPT to Cause Security Headaches
(lien direct)
À mesure que l'adoption de Chatgpt se développe, l'industrie doit procéder à la prudence.Voici pourquoi.
As ChatGPT adoption grows, the industry needs to proceed with caution. Here\'s why.
ChatGPT ChatGPT ★★
InfoSecurityMag.webp 2023-05-18 10:30:00 ChatGPT Leveraged to Enhance Software Supply Chain Security (lien direct) Ox-GPT est conçu pour aider à résoudre rapidement les vulnérabilités de sécurité pendant le développement de logiciels
OX-GPT is designed to help quickly remediate security vulnerabilities during software development
ChatGPT ChatGPT ★★
InfoSecurityMag.webp 2023-05-17 16:00:00 Batloader usurpère Chatgpt et MidJourney en cyber-attaques
BatLoader Impersonates ChatGPT and Midjourney in Cyber-Attacks
(lien direct)
ESENTIRE a recommandé de sensibiliser les logiciels malveillants à se déguiser en tant qu'applications légitimes
eSentire recommended raising awareness of malware masquerading as legitimate applications
Malware ChatGPT ChatGPT ★★
silicon.fr.webp 2023-05-17 15:53:54 OpenAI : réguler l\'IA oui, mais… (lien direct) Sam Altman, CEO d'OpenAI, éditeur de ChatGPT, et d'autres plaident devant le Congrès de Etats-Unis pour une régulation du secteur. Jeu d'influence ? ChatGPT ★★
silicon.fr.webp 2023-05-17 15:38:00 ChatGPT : concurrence cherche clients (lien direct) Zoom annonce son intention d'intégrer le chatbot Claude... et passe à la même occasion sous silence sa jonction avec ChatGPT. ChatGPT ChatGPT ★★
Korben.webp 2023-05-16 07:00:00 N\'ayez plus peur de vous lancer dans l\'apprentissage du Python avec \'Code With Mu\' (lien direct) Python c’est quand même un langage super cool et simple à prendre en main. Même les enfants peuvent s’y mettre, et ce serait dommage de vous en priver surtout que maintenant avec ChatGPT et services d’IA similaires, on peut arriver à ses fins beaucoup plus facilement et apprendre à coder … Suite ChatGPT ChatGPT ★★
PaloAlto.webp 2023-05-12 17:00:13 Sécuriser et gérer le trafic Chatgpt avec Palo Alto Networks App-ID
Securing and Managing ChatGPT Traffic with Palo Alto Networks App-ID
(lien direct)
> Le dilemme de la convivialité et de la sécurité des outils d'IA devient une préoccupation, mais la gestion du trafic Chatgpt avec l'application de Palo Alto Networks est possible.
>The dilemma of usability and security of AI tools is becoming a concern, but managing ChatGPT traffic with Palo Alto Networks APP-ID is possible.
ChatGPT ChatGPT ★★
Netskope.webp 2023-05-12 00:13:20 Garanties de protection des données modernes pour le chatppt et autres applications génératrices d'IA
Modern Data Protection Safeguards for ChatGPT and Other Generative AI Applications
(lien direct)
> Ces derniers temps, la montée de l'intelligence artificielle (IA) a révolutionné la façon dont de plus en plus les utilisateurs d'entreprise interagissent avec leur travail quotidien.Des applications SAAS basées sur l'IA comme CHATGPT ont offert d'innombrables opportunités aux organisations et à leurs employés pour améliorer la productivité des entreprises, faciliter de nombreuses tâches, améliorer les services et aider à rationaliser les opérations.Équipes et individus [& # 8230;]
>In recent times, the rise of artificial intelligence (AI) has revolutionized the way more and more corporate users interact with their daily work. Generative AI-based SaaS applications like ChatGPT have offered countless opportunities to organizations and their employees to improve business productivity, ease numerous tasks, enhance services, and assist in streamlining operations. Teams and individuals […]
Cloud ChatGPT ChatGPT ★★
Trend.webp 2023-05-12 00:00:00 Annonces d'outils d'IA malveillants utilisés pour livrer le voleur Redline
Malicious AI Tool Ads Used to Deliver Redline Stealer
(lien direct)
Nous avons observé des campagnes publicitaires malveillantes dans le moteur de recherche de Google \\ avec des thèmes liés à des outils d'IA tels que MidJourney et Chatgpt.
We\'ve been observing malicious advertisement campaigns in Google\'s search engine with themes that are related to AI tools such as Midjourney and ChatGPT.
Tool ChatGPT ★★
Korben.webp 2023-05-10 07:00:00 Transformez votre Raspberry Pi en assistant vocal avec VoiceGPT (lien direct) On est dimanche, vous êtes en pleine mission bricolage comme tous les weekends, un tournevis à la main à méditer sur le sens à donner à votre vis, quand soudain, vous vous posez une question sur le type de cheville à utiliser ! Il vous faut une réponse rapide, mais … Suite ChatGPT ★★
knowbe4.webp 2023-05-09 13:00:00 Cyberheistnews Vol 13 # 19 [Watch Your Back] Nouvelle fausse erreur de mise à jour Chrome Attaque cible vos utilisateurs
CyberheistNews Vol 13 #19 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users
(lien direct)
CyberheistNews Vol 13 #19 CyberheistNews Vol 13 #19  |   May 9th, 2023 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages. "Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area." The message displayed reads, "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update." A link is provided at the bottom of the bogus error message that takes the user to what\'s misrepresented as a link that will support a Chrome manual update. In fact the link will download a ZIP file that contains an EXE file. The payload is a cryptojacking Monero miner. A cryptojacker is bad enough since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and serving as staging for further attacks. This campaign may be more effective for its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks bogus messages concerning IT issues carry with them. Informed users are the last line of defense against attacks like these. New school security awareness training can help any organization sustain that line of defense and create a strong security culture. Blog post with links:https://blog.knowbe4.com/fake-chrome-update-error-messages A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they\'re more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more. Join Roger A. Grimes, KnowBe4\'s Data-Driven Defense Evangelist, Ransomware Data Breach Spam Malware Tool Threat Prediction NotPetya NotPetya APT 28 ChatGPT ChatGPT ★★
AlienVault.webp 2023-05-08 10:00:00 Empêcher des attaques de phishing sophistiquées destinées aux employés
Preventing sophisticated phishing attacks aimed at employees
(lien direct)
The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.  As technology advances, phishing attempts are becoming more sophisticated. It can be challenging for employees to recognize an email is malicious when it looks normal, so it’s up to their company to properly train workers in prevention and detection. Phishing attacks are becoming more sophisticated Misspellings and poorly formatted text used to be the leading indicators of an email scam, but they’re getting more sophisticated. Today, hackers can spoof email addresses and bots sound like humans. It’s becoming challenging for employees to tell if their emails are real or fake, which puts the company at risk of data breaches. In March 2023, an artificial intelligence chatbot called GPT-4 received an update that lets users give specific instructions about styles and tasks. Attackers can use it to pose as employees and send convincing messages since it sounds intelligent and has general knowledge of any industry. Since classic warning signs of phishing attacks aren’t applicable anymore, companies should train all employees on the new, sophisticated methods. As phishing attacks change, so should businesses. Identify the signs Your company can take preventive action to secure its employees against attacks. You need to make it difficult for hackers to reach them, and your company must train them on warning signs. While blocking spam senders and reinforcing security systems is up to you, they must know how to identify and report themselves. You can prevent data breaches if employees know what to watch out for: Misspellings: While it’s becoming more common for phishing emails to have the correct spelling, employees still need to look for mistakes. For example, they could look for industry-specific language because everyone in their field should know how to spell those words. Irrelevant senders: Workers can identify phishing — even when the email is spoofed to appear as someone they know — by asking themselves if it is relevant. They should flag the email as a potential attack if the sender doesn’t usually reach out to them or is someone in an unrelated department. Attachments: Hackers attempt to install malware through links or downloads. Ensure every employee knows they shouldn\'t click on them. Odd requests: A sophisticated phishing attack has relevant messages and proper language, but it is somewhat vague because it goes to multiple employees at once. For example, they could recognize it if it’s asking them to do something unrelated to their role. It may be harder for people to detect warning signs as attacks evolve, but you can prepare them for those situations as well as possible. It’s unlikely hackers have access to their specific duties or the inner workings of your company, so you must capitalize on those details. Sophisticated attacks will sound intelligent and possibly align with their general duties, so everyone must constantly be aware. Training will help employees identify signs, but you need to take more preventive action to ensure you’re covered. Take preventive action Basic security measures — like regularly updating passwords and running antivirus software — are fundamental to protecting your company. For example, everyone should change their passwords once every three months at minimum to ensur Spam Malware ChatGPT ★★
Netskope.webp 2023-05-05 19:58:52 Consolidation, flexibilité, chatppt et autres plats clés de NetSkopers à la conférence RSA 2023
Consolidation, Flexibility, ChatGPT, & Other Key Takeaways from Netskopers at RSA Conference 2023
(lien direct)
À la conférence RSA 2023, un certain nombre de netskopers de toute l'organisation qui ont assisté à l'événement à San Francisco ont partagé des commentaires sur les tendances, les sujets et les plats à emporter de la conférence de cette année.Parce qu'il y a tellement d'activités qui se produisent pendant la conférence, que ce soit des keynotes, des présentations d'experts, un flocage tentaculaire couvrant plusieurs bâtiments avec des centaines de [& # 8230;]
At RSA Conference 2023, a number of Netskopers from across the organization who attended the event in San Francisco shared commentary on the trends, topics, and takeaways from this year\'s conference. Because there are so many activities happening during the conference, whether that\'s keynotes, expert presentations, a sprawling showfloor spanning several buildings with hundreds of […]
Conference ChatGPT ★★
Last update at: 2024-05-08 06:08:08
See our sources.
My email:

To see everything: Our RSS (filtrered) Twitter