What's new around the internet

Last one

Src Date (GMT) Title Description Tags Stories Notes
globalsecuritymag.webp 2024-01-23 10:21:41 Cybersecurity: 5 risks to watch in 2024, according to Hiscox (direct link) Cybersecurity: 5 risks to watch in 2024, according to Hiscox. From the Hiscox 2023 cyber risk management report: with the growth of models such as ChatGPT, which make convincing phishing emails easier to write, employees have found themselves on the front line more than ever - Points of View Threat Prediction ChatGPT ★★★★
AlienVault.webp 2023-12-27 11:00:00 Post-pandemic Cybersecurity: Lessons from the global health crisis
(direct link)
The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.  Beyond ‘just’ causing mayhem in the outside world, the pandemic also led to a serious and worrying rise in cybersecurity breaches. In 2020 and 2021, businesses saw a whopping 50% increase in the amount of attempted breaches. The transition to remote work, outdated healthcare organization technology, the adoption of AI bots in the workplace, and the presence of general uncertainty and fear led to new opportunities for bad actors seeking to exploit and benefit from this global health crisis. In this article, we will take a look at how all of this impacts the state of cybersecurity in the current post-pandemic era, and what conclusions can be drawn. New world, new vulnerabilities Worldwide lockdowns led to a rise in remote work opportunities, which was a necessary adjustment to allow employees to continue to earn a living. However, the sudden shift to the work-from-home format also caused a number of challenges and confusion for businesses and remote employees alike. The average person didn’t have the IT department a couple of feet away, so they were forced to fend for themselves. Whether it was deciding whether to use a VPN or not, was that email really a phishing one, or even just plain software updates, everybody had their hands full. With employers busy with training programs, threat actors began intensifying their ransomware-related efforts, resulting in a plethora of high-profile incidents in the last couple of years. A double-edged digital sword If the pandemic did one thing, it’s making us more reliant on both software and digital currencies. You already know where we’re going with this—it’s fertile ground for cybercrime. Everyone from the Costa Rican government to Nvidia got hit. With the dominance of Bitcoin as a payment method in ransoming, tracking down perpetrators is infinitely more difficult than it used to be. The old adage holds more true than ever - an ounce of prevention is worth a pound of cure. To make matters worse, amongst all that chaos, organizations also had to pivot away from vulnerable, mainstream software solutions. Even if it’s just choosing a new image editor or integrating a PDF SDK, it’s an increasing burden for businesses that are already trying to modernize or simply maintain. Actors strike where we’re most vulnerable Healthcare organizations became more important than ever during the global coronavirus pandemic. But this time also saw unprecedented amounts of cybersecurity incidents take place as bad actors exploited outdated cybersecurity measures. The influx of sudden need caused many overburdened healthcare organizations to lose track of key cybersecurity protocols that could help shore up gaps in the existing protective measures. The United States healthcare industry saw a 25% spike in successful data breaches during the pandemic, which resulted in millions of dollars of damages and the loss of privacy for thousands of patients whose data was compromis Data Breach Vulnerability Threat Studies Prediction ChatGPT ★★
Watchguard.webp 2023-11-29 00:00:00 WatchGuard Threat Lab's 2024 cyber predictions (direct link) Paris, November 29, 2023 – WatchGuard® Technologies, one of the world leaders in unified cybersecurity, publishes its cybersecurity predictions for 2024. The report covers the attacks and information security trends that, according to the WatchGuard Threat Lab research team, will emerge in 2024, such as: the manipulation of AI-based language models (the LLMs, or Large Language Models, that gave rise to tools such as ChatGPT); "vishers" scaling up their malicious operations with AI-based voice chatbots; and hacks of modern VR/MR headsets. Corey Nachreiner, Chief Security Officer at WatchGuard Technologies, explains: "Every new technology trend opens new attack vectors for cybercriminals. In 2024, emerging threats targeting businesses and individuals will be even more intense, complex and difficult to manage. Given the shortage of skilled cybersecurity professionals, the need for managed service providers (MSPs), unified security and automated platforms to strengthen cybersecurity and protect businesses against a constantly evolving range of threats has never been greater." Here is a summary of the WatchGuard Threat Lab team's main cybersecurity predictions for 2024: Prompt engineering will be used to manipulate large language models (LLMs): Businesses and individuals are turning to LLMs to improve their operational efficiency, but threat actors are learning to exploit LLMs for their own malicious purposes. In 2024, the WatchGuard Threat Lab predicts that a skilled prompt engineer, whether a criminal attacker or a researcher, will be able to crack the code and manipulate an LLM into leaking private data. Sales of AI-based spear-phishing tools will explode on the dark web: Cybercriminals can already buy tools on the black market that send unsolicited emails, automatically write convincing text, and comb the Internet and social media for information about a particular target. However, many of these tools are still manual, and attackers must target one user or group at a time. Well-formatted tasks of this kind lend themselves perfectly to automation via artificial intelligence and machine learning, so AI-powered tools are likely to become best-sellers on the dark web in 2024. AI-based voice phishing (vishing) will surge in 2024: Although Voice over IP (VoIP) and automation technology make it easy to mass-dial thousands of numbers, once a potential victim is on the line, a human scammer is still needed to reel them in, which limits the scale of vishing operations. But in 2024 that could change. WatchGuard predicts that the combination of convincing deepfake audio and LLMs capable of carrying on conversations with unsuspecting victims will dramatically increase the scale and volume of vishing calls. What's more, these calls might not even Tool Threat Prediction ChatGPT ChatGPT ★★★
ProofPoint.webp 2023-11-28 23:05:04 Proofpoint's 2024 Predictions: Brace for Impact
(direct link)
In the ever-evolving landscape of cybersecurity, defenders find themselves navigating yet another challenging year. Threat actors persistently refine their tactics, techniques, and procedures (TTPs), showcasing adaptability and the rapid iteration of novel and complex attack chains. At the heart of this evolution lies a crucial shift: threat actors now prioritize identity over technology. While the specifics of TTPs and the targeted technology may change, one constant remains: humans and their identities are the most targeted links in the attack chain. Recent instances of supply chain attacks exemplify this shift, illustrating how adversaries have pivoted from exploiting software vulnerabilities to targeting human vulnerabilities through social engineering and phishing. Notably, the innovative use of generative AI, especially its ability to improve phishing emails, exemplifies a shift towards manipulating human behavior rather than exploiting technological weaknesses. As we reflect on 2023, it becomes evident that cyber threat actors possess the capabilities and resources to adapt their tactics in response to increased security measures such as multi-factor authentication (MFA). Looking ahead to 2024, the trend suggests that threats will persistently revolve around humans, compelling defenders to take a different approach to breaking the attack chain. So, what's on the horizon? The experts at Proofpoint provide insightful predictions for the next 12 months, shedding light on what security teams might encounter and the implications of these trends. 1. Cyber Heists: Casinos are Just the Tip of the Iceberg Cyber criminals are increasingly targeting digital supply chain vendors, with a heightened focus on security and identity providers. Aggressive social engineering tactics, including phishing campaigns, are becoming more prevalent. The Scattered Spider group, responsible for ransomware attacks on Las Vegas casinos, showcases the sophistication of these tactics. Phishing help desk employees for login credentials and bypassing MFA through phishing one-time password (OTP) codes are becoming standard practices. These tactics have extended to supply chain attacks, compromising identity provider (IDP) vendors to access valuable customer information. The forecast for 2024 includes the replication and widespread adoption of such aggressive social engineering tactics, broadening the scope of initial compromise attempts beyond the traditional edge device and file transfer appliances. 2. Generative AI: The Double-Edged Sword The explosive growth of generative AI tools like ChatGPT, FraudGPT and WormGPT brings both promise and peril, but the sky is not falling as far as cybersecurity is concerned. While large language models took the stage, the fear of misuse prompted the U.S. president to issue an executive order in October 2023. At the moment, threat actors are making bank doing other things. Why bother reinventing the model when it's working just fine? But they'll morph their TTPs when detection starts to improve in those areas. On the flip side, more vendors will start injecting AI and large language models into their products and processes to boost their security offerings. Across the globe, privacy watchdogs and customers alike will demand responsible AI policies from technology companies, which means we'll start seeing statements being published about responsible AI policies. Expect both spectacular failures and responsible AI policies to emerge. 3. 
Mobile Device Phishing: The Rise of Omni-Channel Tactics Takes Centre Stage A notable trend for 2023 was the dramatic increase in mobile device phishing, and we expect this threat to rise even more in 2024. Threat actors are strategically redirecting victims to mobile interactions, exploiting the vulnerabilities inherent in mobile platforms. Conversational abuse, including conversational smishing, has experienced exponential growth. Multi-touch campaigns aim to lure users away from desktops to mobile devices, utilizing tactics like QR codes and fraudulent voice calls Ransomware Malware Tool Vulnerability Threat Mobile Prediction Prediction ChatGPT ChatGPT ★★★
AlienVault.webp 2023-10-16 10:00:00 Strengthening Cybersecurity: Force multiplication and security efficiency
(direct link)
In the ever-evolving landscape of cybersecurity, the battle between defenders and attackers has historically been marked by an asymmetrical relationship. Within the cybersecurity realm, asymmetry has characterized the relationship between those safeguarding digital assets and those seeking to exploit vulnerabilities. Even within this context, where attackers are typically at a resource disadvantage, data breaches have continued to rise year after year as cyber threats adapt and evolve and utilize asymmetric tactics to their advantage. These include technologies and tactics such as artificial intelligence (AI) and advanced social engineering tools. To effectively combat these threats, companies must rethink their security strategies, concentrating their scarce resources more efficiently and effectively through the concept of force multiplication. Asymmetrical threats, in the world of cybersecurity, can be summed up as the inherent disparity between adversaries and the tactics employed by the weaker party to neutralize the strengths of the stronger one. The utilization of AI and similar tools further erodes the perceived advantages that organizations believe they gain through increased spending on sophisticated security measures. Recent data from InfoSecurity Magazine, referencing the 2023 Checkpoint study, reveals a disconcerting trend: global cyberattacks increased by 7% between Q1 2022 and Q1 2023. While not significant at first blush, a deeper analysis reveals a more disturbing trend, specifically the use of AI. AI's malicious deployment is exemplified in the following quote from their research: "...we have witnessed several sophisticated campaigns from cyber-criminals who are finding ways to weaponize legitimate tools for malicious gains." Furthermore, the report highlights: "Recent examples include using ChatGPT for code generation that can help less-skilled threat actors effortlessly launch cyberattacks." As threat actors continue to employ asymmetrical strategies to render organizations' substantial and ever-increasing security investments less effective, organizations must adapt to address this evolving threat landscape. Arguably, one of the most effective methods to confront threat adaptation and asymmetric tactics is through the concept of force multiplication, which enhances relative effectiveness with fewer resources consumed, thereby increasing the efficiency of the security dollar. Efficiency, in the context of cybersecurity, refers to achieving the greatest cumulative effect of cybersecurity efforts with the lowest possible expenditure of resources, including time, effort, and costs. While the concept of efficiency may seem straightforward, applying complex technological and human resources effectively and efficiently in complex domains like security demands more than mere calculations. This subject has been studied, modeled, and debated within the military community for centuries. Military and combat efficiency, a domain with a long history of analysis, Tool Vulnerability Threat Studies Prediction ChatGPT ★★★
DarkReading.webp 2023-07-25 16:39:24 \\ 'fraudegpt \\' chatbot malveillant maintenant à vendre sur Dark Web
\\'FraudGPT\\' Malicious Chatbot Now for Sale on Dark Web
(direct link)
The subscription-based, generative AI-driven offering joins a growing trend toward "generative AI jailbreaking" to create ChatGPT copycat tools for cyberattacks.
Tool Prediction ChatGPT ChatGPT ★★
AlienVault.webp 2023-06-21 10:00:00 Toward a more resilient SOC: the power of machine learning
(direct link)
A way to manage too much data To protect the business, security teams need to be able to detect and respond to threats fast. The problem is the average organization generates massive amounts of data every day. Information floods into the Security Operations Center (SOC) from network tools, security tools, cloud services, threat intelligence feeds, and other sources. Reviewing and analyzing all this data in a reasonable amount of time has become a task that is well beyond the scope of human efforts. AI-powered tools are changing the way security teams operate. Machine learning (which is a subset of artificial intelligence, or “AI”)—and in particular, machine learning-powered predictive analytics—are enhancing threat detection and response in the SOC by providing an automated way to quickly analyze and prioritize alerts. Machine learning in threat detection So, what is machine learning (ML)? In simple terms, it is a machine's ability to automate a learning process so it can perform tasks or solve problems without specifically being told to do so. Or, as AI pioneer Arthur Samuel put it, “. . . to learn without explicitly being programmed.” ML algorithms are fed large amounts of data that they parse and learn from so they can make informed predictions on outcomes in new data. Their predictions improve with “training”–the more data an ML algorithm is fed, the more it learns, and thus the more accurate its baseline models become. While ML is used for various real-world purposes, one of its primary use cases in threat detection is to automate identification of anomalous behavior. The ML model categories most commonly used for these detections are: Supervised models learn by example, applying knowledge gained from existing labeled datasets and desired outcomes to new data. For example, a supervised ML model can learn to recognize malware. It does this by analyzing data associated with known malware traffic to learn how it deviates from what is considered normal. It can then apply this knowledge to recognize the same patterns in new data. Unsupervised models do not rely on labels but instead identify structure, relationships, and patterns in unlabeled datasets. They then use this knowledge to detect abnormalities or changes in behavior. For example: an unsupervised ML model can observe traffic on a network over a period of time, continuously learning (based on patterns in the data) what is “normal” behavior, and then investigating deviations, i.e., anomalous behavior. Large language models (LLMs), such as ChatGPT, are a type of generative AI that use unsupervised learning. They train by ingesting massive amounts of unlabeled text data. Not only can LLMs analyze syntax to find connections and patterns between words, but they can also analyze semantics. This means they can understand context and interpret meaning in existing data in order to create new content. Finally, reinforcement models, which more closely mimic human learning, are not given labeled inputs or outputs but instead learn and perfect strategies through trial and error. With ML, as with any data analysis tool, the accuracy of the output depends critically on the quality and breadth of the data set that is used as an input. A valuable tool for the SOC The SOC needs to be resilient in the face of an ever-changing threat landscape. Analysts have to be able to quickly understand which alerts to prioritize and which to ignore. 
Machine learning helps optimize security operations by making threat detection and response faster and more accurate. Malware Tool Threat Prediction Cloud ChatGPT ★★
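The unsupervised approach described in the entry above (learn a baseline of "normal" traffic, then flag deviations) can be made concrete with a short sketch. This is not AT&T's or the article's implementation; it is a minimal illustration that assumes scikit-learn's IsolationForest and invented flow features (bytes, packets, duration).

# Minimal illustrative sketch (assumption, not from the article): unsupervised
# anomaly detection over made-up network-flow features using IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline traffic assumed to be mostly benign: [bytes_sent, packet_count, duration_s]
normal_flows = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),
    rng.normal(40, 10, 1_000),
    rng.normal(2.0, 0.5, 1_000),
])

# "Training" builds the baseline model of normal behavior.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# New observations: one ordinary flow, one bulky long-lived outlier.
new_flows = np.array([
    [5_200, 42, 2.1],
    [250_000, 900, 45.0],
])

labels = model.predict(new_flows)        # +1 = looks normal, -1 = anomalous
scores = model.score_samples(new_flows)  # lower score = more anomalous
for flow, label, score in zip(new_flows, labels, scores):
    status = "anomalous" if label == -1 else "normal"
    print(f"flow={flow.tolist()} -> {status} (score={score:.3f})")

In a real SOC pipeline the features would come from flow or log telemetry, and the anomaly scores would feed alert prioritization rather than a print statement.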
knowbe4.webp 2023-05-23 13:00:00 CyberheistNews Vol 13 #21 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend
(direct link)
CyberheistNews Vol 13 #21  |   May 23rd, 2023 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend New data sheds light on how likely your organization will succumb to a ransomware attack, whether you can recover your data, and what's inhibiting a proper security posture. You have a solid grasp on what your organization's cybersecurity stance does and does not include. But is it enough to stop today's ransomware attacks? CyberEdge's 2023 Cyberthreat Defense Report provides some insight into just how prominent ransomware attacks are and what's keeping orgs from stopping them. According to the report, in 2023: 7% of organizations were victims of a ransomware attack 7% of those paid a ransom 73% were able to recover data Only 21.6% experienced solely the encryption of data and no other form of extortion It's this last data point that interests me. Nearly 78% of victim organizations experienced one or more additional forms of extortion. CyberEdge mentions threatening to publicly release data, notifying customers or media, and committing a DDoS attack as examples of additional threats mentioned by respondents. IT decision makers were asked to rate on a scale of 1-5 (5 being the highest) what were the top inhibitors of establishing and maintaining an adequate defense. The top inhibitor (with an average rank of 3.66) was a lack of skilled personnel – we've long known the cybersecurity industry is lacking a proper pool of qualified talent. In second place, with an average ranking of 3.63, is low security awareness among employees – something only addressed by creating a strong security culture with new-school security awareness training at the center of it all. Blog post with links: https://blog.knowbe4.com/ransomware-victim-threats [Free Tool] Who Will Fall Victim to QR Code Phishing Attacks? Bad actors have a new way to launch phishing attacks to your users: weaponized QR codes. QR code phishing is especially dangerous because there is no URL to check and messages bypass traditional email filters. With the increased popularity of QR codes, users are more at Ransomware Hack Tool Vulnerability Threat Prediction ChatGPT ★★
knowbe4.webp 2023-05-09 13:00:00 CyberheistNews Vol 13 #19 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users
(direct link)
CyberheistNews Vol 13 #19  |   May 9th, 2023 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages. "Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area." The message displayed reads, "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update." A link is provided at the bottom of the bogus error message that takes the user to what's misrepresented as a link that will support a Chrome manual update. In fact the link will download a ZIP file that contains an EXE file. The payload is a cryptojacking Monero miner. A cryptojacker is bad enough since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and serving as staging for further attacks. This campaign may be more effective for its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks bogus messages concerning IT issues carry with them. Informed users are the last line of defense against attacks like these. New school security awareness training can help any organization sustain that line of defense and create a strong security culture. Blog post with links: https://blog.knowbe4.com/fake-chrome-update-error-messages A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they're more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more. Join Roger A. Grimes, KnowBe4's Data-Driven Defense Evangelist, Ransomware Data Breach Spam Malware Tool Threat Prediction NotPetya NotPetya APT 28 ChatGPT ChatGPT ★★
AlienVault.webp 2023-05-01 10:00:00 The role of AI in healthcare: Revolutionizing the healthcare industry
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. Introduction Artificial intelligence (AI) is the mimicry of certain aspects of human behavior, such as language processing and decision-making, using large language models (LLMs) and natural language processing (NLP). LLMs are a specific type of AI that analyze and generate natural language using deep learning algorithms. AI programs are made to think like humans and imitate their actions without being biased or influenced by emotions. LLMs provide systems for processing large datasets and giving a clearer view of the task at hand. AI can be used to identify patterns, analyze data, and make predictions based on the data supplied to it. It can be used as chatbots, virtual assistants, language translation, and image processing systems. Some leading AI providers are ChatGPT by OpenAI, Bard by Google, Bing AI by Microsoft, and Watson AI by IBM. AI has the potential to revolutionize various industries, including transportation, finance, healthcare, and more, by making fast, accurate, and informed decisions with the help of large datasets. In this article, we will discuss some applications of AI in healthcare. Applications of AI in healthcare There are several AI applications that have been implemented in the healthcare sector and have proven very successful. Some examples are: Medical imaging: AI algorithms are used to analyze medical images such as X-rays, MRI scans, and CT scans. AI algorithms can help radiologists identify anomalies, helping radiologists make more accurate diagnoses. For example, Google's AI-powered DeepMind has shown accuracy comparable to human radiologists in identifying breast cancer. Personalized medicine: AI can be used to generate insights on biomarkers, genetic information, allergies, and psychological evaluations to personalize the best treatment for patients. This data can be used to predict how a patient will respond to various courses of treatment for a given condition. This can minimize adverse effects and reduce the cost of unnecessary or expensive treatment options. Similarly, it can be used to treat genetic disorders with personalized treatment plans. For example, Deep Genomics is a company using AI systems to develop personalized treatments for genetic disorders. Disease diagnosis: AI systems can be used to analyze patient data, including medical history and test results, to make earlier and more accurate diagnoses of deadly conditions such as cancer. For example, Pfizer has collaborated with various AI-based services to diagnose diseases, and IBM Watson uses NLP and machine learning algorithms for oncology in developing treatment plans for cancer patients. 
Drug discovery: AI can be used in R&D for drug discovery, making the process faster. AI can eliminate certain Prediction Medical ChatGPT ChatGPT ★★
knowbe4.webp 2023-02-28 14:00:00 CyberheistNews Vol 13 #09 [Eye Opener] Should You Click on Unsubscribe? (direct link) CyberheistNews Vol 13 #09  |   February 28th, 2023 [Eye Opener] Should You Click on Unsubscribe? By Roger A. Grimes. Some common questions we get are "Should I click on an unwanted email's 'Unsubscribe' link? Will that lead to more or less unwanted email?" The short answer is that, in general, it is OK to click on a legitimate vendor's unsubscribe link. But if you think the email is sketchy or coming from a source you would not want to validate your email address as valid and active, or are unsure, do not take the chance, skip the unsubscribe action. In many countries, legitimate vendors are bound by law to offer (free) unsubscribe functionality and abide by a user's preferences. For example, in the U.S., the 2003 CAN-SPAM Act states that businesses must offer clear instructions on how the recipient can remove themselves from the involved mailing list and that request must be honored within 10 days. Note: Many countries have laws similar to the CAN-SPAM Act, although with privacy protection ranging the privacy spectrum from very little to a lot more protection. The unsubscribe feature does not have to be a URL link, but it does have to be an "internet-based way." The most popular alternative method besides a URL link is an email address to use. In some cases, there are specific instructions you have to follow, such as put "Unsubscribe" in the subject of the email. Other times you are expected to craft your own message. Luckily, most of the time simply sending any email to the listed unsubscribe email address is enough to remove your email address from the mailing list. [CONTINUED] at the KnowBe4 blog: https://blog.knowbe4.com/should-you-click-on-unsubscribe [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, March 1, @ 2:00 PM (ET), for a live demo of how KnowBe4 introduces a new-school approac Malware Hack Tool Vulnerability Threat Guideline Prediction APT 38 ChatGPT ★★★
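As a concrete aside (an assumption of ours, not part of the KnowBe4 article): many legitimate bulk senders also advertise their unsubscribe mechanism in the standard List-Unsubscribe mail header, which can carry a mailto: address, a URL, or both, matching the "email address" and "URL link" options the entry describes. The sketch below only assumes Python's standard email module and an invented message; whether to use the advertised mechanism remains the trust judgment the article describes.

# Illustrative sketch (assumption, not from the article): read the standard
# List-Unsubscribe header from a message to see what mechanism the sender offers.
from email import message_from_string

raw_message = """\
From: newsletter@example.com
To: you@example.org
Subject: Weekly deals
List-Unsubscribe: <mailto:unsub@example.com?subject=Unsubscribe>, <https://example.com/unsub?id=123>

Hello...
"""

msg = message_from_string(raw_message)
header = msg.get("List-Unsubscribe", "")

# The header is a comma-separated list of URIs wrapped in angle brackets.
targets = [part.strip().strip("<>") for part in header.split(",") if part.strip()]
for target in targets:
    kind = "email address" if target.startswith("mailto:") else "URL link"
    print(f"{kind}: {target}")

The addresses and URLs above are placeholders used only to show the two mechanism types.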
silicon.fr.webp 2023-02-07 08:09:44 ChatGPT: the battle for web search has begun (direct link) Google, Baidu and Microsoft itself are expanding their communication about their respective "AI + web search" strategies. Prediction ChatGPT ★★★
Chercheur.webp 2023-01-10 12:18:55 ChatGPT-Written Malware (direct link) I don’t know how much of a thing this will end up being, but we are seeing ChatGPT-written malware in the wild. …within a few weeks of ChatGPT going live, participants in cybercrime forums—some with little or no coding experience—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks. “It's still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web,” company researchers wrote. “However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code.”... Malware Tool Prediction ChatGPT ★★
silicon.fr.webp 2022-12-20 13:42:02 Codex, ChatGPT… OpenAI, a cyberattack factory? (direct link) Toward AI-driven cyberattacks from start to finish? Trend Micro looked into this possibility with two OpenAI tools: Codex and ChatGPT. Prediction ChatGPT ★★
Last update at: 2024-05-19 17:08:07
See our sources.