What's new around the internet

Last one

Src Date (GMT) Title Description Tags Stories Notes
News.webp 2024-02-28 01:45:07 OpenAI claims New York Times paid someone to 'hack' ChatGPT
(direct link)
Super lab claims 'deceptive prompts' that it happily processed - and may have tracked - weren't fair, so case should be dismissed OpenAI has accused The New York Times Company of paying someone to "hack" ChatGPT to generate verbatim paragraphs from articles in its newspaper. By hack, presumably the biz means: Logged in as normal and asked it annoying questions.…
Hack ChatGPT ★★★
ESET.webp 2024-01-31 10:30:00 ESET Research Podcast: ChatGPT, the MOVEit hack, and Pandora
(direct link)
An AI chatbot inadvertently kindles a cybercrime boom, ransomware bandits plunder organizations without deploying ransomware, and a new botnet enslaves Android TV boxes
Ransomware Hack Mobile ChatGPT ★★★
BBC.webp 2023-12-07 00:04:10 ChatGPT builder helps create scam and hack campaigns
(direct link)
A cutting-edge tool from OpenAI appears to be poorly moderated, allowing it to be abused by cyber-criminals.
Hack Tool ChatGPT ★★
CS.webp 2023-08-18 16:11:17 Fifty minutes to hack ChatGPT: Inside the DEF CON competition to break AI
(direct link)
More than 2,000 hackers attacked cutting-edge chatbots to discover vulnerabilities - and demonstrated the challenges of red-teaming AI.
Hack Vulnerability ChatGPT ★★
knowbe4.webp 2023-06-27 13:00:00 CyberheistNews Vol 13 #26 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams
(direct link)
CyberheistNews Vol 13 #26  |   June 27th, 2023 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019 with median reported individual losses of $3,000 last year," the report says. These are the top five text scams reported by the FTC: Copycat bank fraud prevention alerts Bogus "gifts" that can cost you Fake package delivery problems Phony job offers Not-really-from-Amazon security alerts "People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized. If they reply, they'll get a call from a phony 'fraud department' claiming they want to 'help get your money back.' What they really want to do is make unauthorized transfers. "What's more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft." Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there's a problem with a delivery. "The text links to a convincing-looking – but utterly bogus – website that asks for a credit card number to cover a small 'redelivery fee.'" Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information. 
"With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud – for example, fake 'mystery shopper' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says. "Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person's money – and the phony 'employer' – are long gone." Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money. "People may get what looks like a message from 'Amazon,' asking to verify a big-ticket order they didn't place," the FTC says. "Concerned Ransomware Spam Malware Hack Tool Threat FedEx APT 28 APT 15 ChatGPT ChatGPT ★★
knowbe4.webp 2023-06-20 13:00:00 CyberheistNews Vol 13 #25 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches
(direct link)
CyberheistNews Vol 13 #25  |   June 20th, 2023 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches Verizon's DBIR always has a lot of information to unpack, so I'll continue my review by covering how stolen credentials play a role in attacks. This year's Data Breach Investigations Report has nearly 1 million incidents in their data set, making it the most statistically relevant set of report data anywhere. So, what does the report say about the most common threat actions that are involved in data breaches? Overall, the use of stolen credentials is the overwhelming leader in data breaches, being involved in nearly 45% of breaches – this is more than double the second-place spot of "Other" (which includes a number of types of threat actions) and ransomware, which sits at around 20% of data breaches. According to Verizon, stolen credentials were the "most popular entry point for breaches." As an example, in Basic Web Application Attacks, the use of stolen credentials was involved in 86% of attacks. The prevalence of credential use should come as no surprise, given the number of attacks that have focused on harvesting online credentials to provide access to both cloud platforms and on-premises networks alike. And it's the social engineering attacks (whether via phish, vish, SMiSh, or web) where these credentials are compromised - something that can be significantly diminished by engaging users in security awareness training to familiarize them with common techniques and examples of attacks, so when they come across an attack set on stealing credentials, the user avoids becoming a victim. Blog post with links:https://blog.knowbe4.com/stolen-credentials-top-breach-threat [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! 
The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever l Ransomware Data Breach Spam Malware Hack Vulnerability Threat Cloud ChatGPT ChatGPT ★★
knowbe4.webp 2023-05-31 13:00:00 CyberheistNews Vol 13 #22 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks
(direct link)
CyberheistNews Vol 13 #22  |   May 31st, 2023 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks With attackers knowing financial fraud-based phishing attacks are best suited for the one industry where the money is, this massive spike in attacks should both surprise you and not surprise you at all. When you want tires, where do you go? Right – to the tire store. Shoes? Yup – shoe store. The most money you can scam from a single attack? That's right – the financial services industry, at least according to cybersecurity vendor Armorblox's 2023 Email Security Threat Report. According to the report, the financial services industry as a target has increased by 72% over 2022 and was the single largest target of financial fraud attacks, representing 49% of all such attacks. When breaking down the specific types of financial fraud, it doesn't get any better for the financial industry: 51% of invoice fraud attacks targeted the financial services industry 42% were payroll fraud attacks 63% were payment fraud To make matters worse, nearly one-quarter (22%) of financial fraud attacks successfully bypassed native email security controls, according to Armorblox. That means one in five email-based attacks made it all the way to the Inbox. The next layer in your defense should be a user that's properly educated using security awareness training to easily identify financial fraud and other phishing-based threats, stopping them before they do actual damage. Blog post with links:https://blog.knowbe4.com/financial-fraud-phishing [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. 
Join us Wednesday, June 7, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 introduces a new-school approach to security awareness training and simulated phishing. Get a look at THREE NEW FEATURES and see how easy it is to train and phish your users. Ransomware Malware Hack Tool Threat Conference Uber ChatGPT ChatGPT Guam ★★
knowbe4.webp 2023-05-23 13:00:00 CyberheistNews Vol 13 #21 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend
(direct link)
CyberheistNews Vol 13 #21  |   May 23rd, 2023 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend New data sheds light on how likely your organization will succumb to a ransomware attack, whether you can recover your data, and what's inhibiting a proper security posture. You have a solid grasp on what your organization's cybersecurity stance does and does not include. But is it enough to stop today's ransomware attacks? CyberEdge's 2023 Cyberthreat Defense Report provides some insight into just how prominent ransomware attacks are and what's keeping orgs from stopping them. According to the report, in 2023: 7% of organizations were victims of a ransomware attack 7% of those paid a ransom 73% were able to recover data Only 21.6% experienced solely the encryption of data and no other form of extortion It's this last data point that interests me. Nearly 78% of victim organizations experienced one or more additional forms of extortion. CyberEdge mentions threatening to publicly release data, notifying customers or media, and committing a DDoS attack as examples of additional threats mentioned by respondents. IT decision makers were asked to rate on a scale of 1-5 (5 being the highest) what were the top inhibitors of establishing and maintaining an adequate defense. The top inhibitor (with an average rank of 3.66) was a lack of skilled personnel – we've long known the cybersecurity industry is lacking a proper pool of qualified talent. In second place, with an average ranking of 3.63, is low security awareness among employees – something only addressed by creating a strong security culture with new-school security awareness training at the center of it all. Blog post with links:https://blog.knowbe4.com/ransomware-victim-threats [Free Tool] Who Will Fall Victim to QR Code Phishing Attacks? Bad actors have a new way to launch phishing attacks to your users: weaponized QR codes. 
QR code phishing is especially dangerous because there is no URL to check and messages bypass traditional email filters. With the increased popularity of QR codes, users are more at Ransomware Hack Tool Vulnerability Threat Prediction ChatGPT ★★
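The QR item above notes that "there is no URL to check" in the message body; once the code is decoded, though, the embedded link can be vetted like any other. A minimal sketch of such checks, assuming a URL has already been extracted from the QR image (the heuristics and example URLs are illustrative, not from the story):

```python
# Hedged sketch: red-flag heuristics for a URL decoded from a QR code.
# The specific checks and thresholds are illustrative assumptions.
from urllib.parse import urlparse

def suspicious_qr_url(url):
    """Return a list of red flags found in a decoded QR-code URL."""
    flags = []
    parts = urlparse(url)
    if parts.scheme != "https":
        flags.append("not HTTPS")
    host = parts.hostname or ""
    if host.replace(".", "").isdigit():
        flags.append("raw IP address")          # no real hostname at all
    if "xn--" in host:
        flags.append("punycode hostname")       # possible homoglyph domain
    if "@" in parts.netloc:
        flags.append("userinfo trick")          # real host hides after the @
    return flags

print(suspicious_qr_url("http://192.168.0.10/login"))  # flags raw-IP HTTP link
print(suspicious_qr_url("https://example.com/pay"))    # no flags
```

None of these checks replaces user caution, but they show that a decoded QR payload is just a URL and can be filtered like one.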
knowbe4.webp 2023-05-02 13:00:00 CyberheistNews Vol 13 #18 [Eye on AI] Does ChatGPT Have Cybersecurity Tells?
(direct link)
CyberheistNews Vol 13 #18  |   May 2nd, 2023 [Eye on AI] Does ChatGPT Have Cybersecurity Tells? Poker players and other human lie detectors look for "tells," that is, a sign by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A cardplayer yawns when they're about to bluff, for example, or someone's pupils dilate when they've successfully drawn a winning card. It seems that artificial intelligence (AI) has its tells as well, at least for now, and some of them have become so obvious and so well known that they've become internet memes. "ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter," Vice's Motherboard observes, and there are some ways of interacting with the AI that lead it into betraying itself for what it is. "When you ask ChatGPT to do something it's not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: 'As an AI language model, I cannot generate inappropriate or offensive content,' it said. Those two phrases, 'as an AI language model' and 'I cannot generate inappropriate content,' recur so frequently in ChatGPT generated content that they've become memes." That happy state of easy detection, however, is unlikely to endure. As Motherboard points out, these tells are a feature of "lazily executed" AI. With a little more care and attention, they'll grow more persuasive. One risk of the AI language models is that they can be adapted to perform social engineering at scale. In the near term, new-school security awareness training can help alert your people to the tells of automated scamming. And in the longer term, that training will adapt and keep pace with the threat as it evolves. 
Blog post with links:https://blog.knowbe4.com/chatgpt-cybersecurity-tells [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, May 3, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 Ransomware Malware Hack Threat ChatGPT ChatGPT ★★
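The "tells" described in this story amount to stock refusal phrases, which makes a naive detector easy to sketch. The phrase list below is illustrative, not exhaustive, and, as the story itself warns, likely to age quickly:

```python
# Minimal sketch of the "AI tell" idea: scan text for stock phrases that
# current chatbots emit when refusing a request. Phrase list is illustrative.
AI_TELL_PHRASES = [
    "as an ai language model",
    "i cannot generate inappropriate",
    "i'm sorry, but i can't",
]

def ai_tells(text):
    """Return the tell phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [p for p in AI_TELL_PHRASES if p in lowered]

review = ("Great product! As an AI language model, I cannot generate "
          "inappropriate or offensive content.")
print(ai_tells(review))          # finds both stock phrases
print(ai_tells("Handwritten, entirely human review"))  # finds none
```

A real filter would need fuzzier matching, and the story's caveat applies: less "lazily executed" AI output won't contain these phrases at all.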
knowbe4.webp 2023-04-25 13:00:00 CyberheistNews Vol 13 #17 [Head Start] Effective Methods How To Teach Social Engineering to an AI
(direct link)
CyberheistNews Vol 13 #17 CyberheistNews Vol 13 #16  |   April 18th, 2023 [Finger on the Pulse]: How Phishers Leverage Recent AI Buzz Curiosity leads people to suspend their better judgment as a new campaign of credential theft exploits a person's excitement about the newest AI systems not yet available to the general public. On Tuesday morning, April 11th, Veriti explained that several unknown actors are making false Facebook ads which advertise a free download of AIs like ChatGPT and Google Bard. Veriti writes "These posts are designed to appear legitimate, using the buzz around OpenAI language models to trick unsuspecting users into downloading the files. However, once the user downloads and extracts the file, the Redline Stealer (aka RedStealer) malware is activated and is capable of stealing passwords and downloading further malware onto the user's device." Veriti describes the capabilities of the Redline Stealer malware which, once downloaded, can take sensitive information like credit card numbers, passwords, and personal information like user location, and hardware. Veriti added "The malware can upload and download files, execute commands, and send back data about the infected computer at regular intervals." Experts recommend using official Google or OpenAI websites to learn when their products will be available and only downloading files from reputable sources. With the rising use of Google and Facebook ads as attack vectors experts also suggest refraining from clicking on suspicious advertisements promising early access to any product on the Internet. Employees can be helped to develop sound security habits like these by stepping them through monthly social engineering simulations. 
Blog post with links:https://blog.knowbe4.com/ai-hype-used-for-phishbait [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters with Spam Malware Hack Threat APT 28 ChatGPT ChatGPT ★★★
knowbe4.webp 2023-04-18 13:00:00 CyberheistNews Vol 13 #16 [Finger on the Pulse]: How Phishers Leverage Recent AI Buzz
(direct link)
CyberheistNews Vol 13 #16  |   April 18th, 2023 [Finger on the Pulse]: How Phishers Leverage Recent AI Buzz Curiosity leads people to suspend their better judgment as a new campaign of credential theft exploits a person's excitement about the newest AI systems not yet available to the general public. On Tuesday morning, April 11th, Veriti explained that several unknown actors are making false Facebook ads which advertise a free download of AIs like ChatGPT and Google Bard. Veriti writes "These posts are designed to appear legitimate, using the buzz around OpenAI language models to trick unsuspecting users into downloading the files. However, once the user downloads and extracts the file, the Redline Stealer (aka RedStealer) malware is activated and is capable of stealing passwords and downloading further malware onto the user's device." Veriti describes the capabilities of the Redline Stealer malware which, once downloaded, can take sensitive information like credit card numbers, passwords, and personal information like user location, and hardware. Veriti added "The malware can upload and download files, execute commands, and send back data about the infected computer at regular intervals." Experts recommend using official Google or OpenAI websites to learn when their products will be available and only downloading files from reputable sources. With the rising use of Google and Facebook ads as attack vectors experts also suggest refraining from clicking on suspicious advertisements promising early access to any product on the Internet. Employees can be helped to develop sound security habits like these by stepping them through monthly social engineering simulations. 
Blog post with links:https://blog.knowbe4.com/ai-hype-used-for-phishbait [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever leav Spam Malware Hack Threat APT 28 ChatGPT ChatGPT ★★★
knowbe4.webp 2023-04-11 13:16:54 CyberheistNews Vol 13 #15 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams
(direct link)
CyberheistNews Vol 13 #15  |   April 11th, 2023 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI which imitates the voice of a "family member in distress." They started out with: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble - he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him. How could it be a scam? Voice cloning, that's how." "Don't Trust The Voice" The FTC explains: "Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We're living with it, here and now. A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member's voice - which he could get from content posted online - and a voice-cloning program. When the scammer calls you, he'll sound just like your loved one. "So how can you tell if a family member is in trouble or if it's a scammer using a cloned voice? Don't trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another family member or their friends." Full text of the alert is at the FTC website. Share with friends, family and co-workers:https://blog.knowbe4.com/the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams A Master Class on IT Security: Roger A. Grimes Teaches Ransomware Mitigation Cybercriminals have become thoughtful about ransomware attacks; taking time to maximize your organization's potential damage and their payoff. Protecting your network from this growing threat is more important than ever. 
And nobody knows this more than Roger A. Grimes, Data-Driven Defense Evangelist at KnowBe4. With 30+ years of experience as a computer security consultant, instructor, and award-winning author, Roger has dedicated his life to making Ransomware Data Breach Spam Malware Hack Tool Threat ChatGPT ChatGPT ★★
knowbe4.webp 2023-04-04 13:00:00 CyberheistNews Vol 13 #14 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist
(direct link)
CyberheistNews Vol 13 #14  |   April 4th, 2023 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist The details in this thwarted VEC attack demonstrate how the use of just a few key details can both establish credibility and indicate the entire thing is a scam. It's not every day you hear about a purely social engineering-based scam taking place that is looking to run away with tens of millions of dollars. But, according to security researchers at Abnormal Security, cybercriminals are becoming brazen and are taking their shots at very large prizes. This attack begins with a case of VEC – where a domain is impersonated. In the case of this attack, the impersonated vendor's domain (which had a .com top level domain) was replaced with a matching .cam domain (.cam domains are supposedly used for photography enthusiasts, but there's the now-obvious problem with it looking very much like .com to the cursory glance). The email attaches a legitimate-looking payoff letter complete with loan details. According to Abnormal Security, nearly every aspect of the request looked legitimate. The telltale signs primarily revolved around the use of the lookalike domain, but there were other grammatical mistakes (that can easily be addressed by using an online grammar service or ChatGPT). This attack was identified well before it caused any damage, but the social engineering tactics leveraged were nearly enough to make this attack successful. Security solutions will help stop most attacks, but for those that make it past scanners, your users need to play a role in spotting and stopping BEC, VEC and phishing attacks themselves – something taught through security awareness training combined with frequent simulated phishing and other social engineering tests. 
Blog post with screenshots and links:https://blog.knowbe4.com/36-mil-vendor-email-compromise-attack [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, April 5, @ 2:00 PM (ET), for a live demo of how KnowBe4 i Ransomware Malware Hack Threat ChatGPT ChatGPT APT 43 ★★
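The .com-to-.cam swap described above sits a tiny edit distance from the real domain, which is exactly what a simple lookalike check can flag. A hedged sketch; the vendor list and similarity threshold are illustrative assumptions, not from the story:

```python
# Sketch: flag sender domains that closely resemble, but do not match,
# a known vendor domain (e.g. vendor.cam vs vendor.com).
from difflib import SequenceMatcher

KNOWN_VENDOR_DOMAINS = {"vendor.com", "supplier.net"}  # illustrative list

def lookalike_of(domain, known, threshold=0.85):
    """Return the known domain this one resembles, or None.

    An exact match is treated as legitimate; a near match (similarity
    ratio >= threshold but not identical) as possible impersonation.
    """
    if domain in known:
        return None
    for good in known:
        if SequenceMatcher(None, domain, good).ratio() >= threshold:
            return good
    return None

print(lookalike_of("vendor.cam", KNOWN_VENDOR_DOMAINS))  # resembles vendor.com
print(lookalike_of("vendor.com", KNOWN_VENDOR_DOMAINS))  # exact match, None
```

Production systems use stronger signals (homoglyph tables, registration age, DMARC), but even this edit-distance check catches the .cam trick the story describes.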
knowbe4.webp 2023-03-28 13:00:00 CyberheistNews Vol 13 #13 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks (direct link) CyberheistNews Vol 13 #13  |   March 28th, 2023 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks Users need to adapt to an evolving threat landscape in which attackers can use AI tools like ChatGPT to craft extremely convincing phishing emails, according to Matthew Tyson at CSO. "A leader tasked with cybersecurity can get ahead of the game by understanding where we are in the story of machine learning (ML) as a hacking tool," Tyson writes. "At present, the most important area of relevance around AI for cybersecurity is content generation. "This is where machine learning is making its greatest strides and it dovetails nicely for hackers with vectors such as phishing and malicious chatbots. The capacity to craft compelling, well-formed text is in the hands of anyone with access to ChatGPT, and that's basically anyone with an internet connection." Tyson quotes Conal Gallagher, CIO and CISO at Flexera, as saying that since attackers can now write grammatically correct phishing emails, users will need to pay attention to the circumstances of the emails. "Looking for bad grammar and incorrect spelling is a thing of the past - even pre-ChatGPT phishing emails have been getting more sophisticated," Gallagher said. "We must ask: 'Is the email expected? Is the from address legit? Is the email enticing you to click on a link?' Security awareness training still has a place to play here." Tyson explains that technical defenses have become very effective, so attackers focus on targeting humans to bypass these measures. 
"Email and other elements of software infrastructure offer built-in fundamental security that largely guarantees we are not in danger until we ourselves take action," Tyson writes. "This is where we can install a tripwire in our mindsets: we should be hyper aware of what it is we are acting upon when we act upon it. "Not until an employee sends a reply, runs an attachment, or fills in a form is sensitive information at risk. The first ring of defense in our mentality should be: 'Is the content I'm looking at legit, not just based on its internal aspects, but given the entire context?' The second ring of defense in our mentality then has to be, 'Wait! I'm being asked to do something here.'" New-school security awareness training with simulated phishing tests enables your employees to recognize increasingly sophisticated phishing attacks and builds a strong security culture. Remember: Culture eats strategy for breakfast and is always top-down. Blog post with links:https://blog.knowbe4.com/identifying-ai-enabled-phishing Ransomware Malware Hack Tool Threat Guideline ChatGPT ChatGPT ★★★
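The "is the from address legit?" question quoted above can be partially automated by comparing a brand claimed in the display name against the actual sender domain. A minimal sketch; the brand-to-domain map is an illustrative assumption:

```python
# Sketch: flag emails whose display name claims a brand whose real domain
# does not match the sending domain. TRUSTED mapping is illustrative.
from email.utils import parseaddr

TRUSTED = {"paypal": "paypal.com", "microsoft": "microsoft.com"}

def display_name_mismatch(from_header):
    """True if the display name invokes a brand but the sender's
    domain is not that brand's real domain."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    for brand, real_domain in TRUSTED.items():
        if brand in name.lower() and not domain.endswith(real_domain):
            return True
    return False

print(display_name_mismatch('"PayPal Support" <alerts@secure-pp.example>'))  # True
print(display_name_mismatch('"PayPal Support" <service@paypal.com>'))        # False
```

The `endswith` check is deliberately loose (it would accept subdomains, and a real filter should anchor on the registrable domain), but it captures the display-name spoofing pattern grammar checks no longer catch.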
no_ico.webp 2023-03-09 21:19:11 New Rise In ChatGPT Scams Reported By Fraudsters (direct link) Since the release of ChatGPT, the cybersecurity company Darktrace has issued a warning, claiming that a rise in criminals utilizing artificial intelligence to craft more intricate schemes to defraud employees and hack into organizations has been observed. The Cambridge-based corporation said that AI further enabled “hacktivist” cyberattacks employing ransomware to extract money from businesses. The […] Ransomware Hack ChatGPT ChatGPT ★★
knowbe4.webp 2023-02-28 14:00:00 CyberheistNews Vol 13 #09 [Eye Opener] Should You Click on Unsubscribe? (direct link) CyberheistNews Vol 13 #09  |   February 28th, 2023 [Eye Opener] Should You Click on Unsubscribe? By Roger A. Grimes. Some common questions we get are "Should I click on an unwanted email's 'Unsubscribe' link? Will that lead to more or less unwanted email?" The short answer is that, in general, it is OK to click on a legitimate vendor's unsubscribe link. But if you think the email is sketchy or coming from a source you would not want to validate your email address as valid and active, or are unsure, do not take the chance, skip the unsubscribe action. In many countries, legitimate vendors are bound by law to offer (free) unsubscribe functionality and abide by a user's preferences. For example, in the U.S., the 2003 CAN-SPAM Act states that businesses must offer clear instructions on how the recipient can remove themselves from the involved mailing list and that request must be honored within 10 days. Note: Many countries have laws similar to the CAN-SPAM Act, although with privacy protection ranging the privacy spectrum from very little to a lot more protection. The unsubscribe feature does not have to be a URL link, but it does have to be an "internet-based way." The most popular alternative method besides a URL link is an email address to use. In some cases, there are specific instructions you have to follow, such as put "Unsubscribe" in the subject of the email. Other times you are expected to craft your own message. Luckily, most of the time simply sending any email to the listed unsubscribe email address is enough to remove your email address from the mailing list. [CONTINUED] at the KnowBe4 blog:https://blog.knowbe4.com/should-you-click-on-unsubscribe [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. 
Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, March 1, @ 2:00 PM (ET), for a live demo of how KnowBe4 introduces a new-school approac Malware Hack Tool Vulnerability Threat Guideline Prediction APT 38 ChatGPT ★★★
News.webp 2023-02-22 20:30:12 No, ChatGPT didn't win a hacking competition prize…yet (direct link) $20k Pwn2Own prize for the humans, zero for the AI It was bound to happen sooner or later. For the first time ever, bug hunters used ChatGPT in a successful Pwn2Own exploit, helping the researchers to hack software used in industrial applications and win $20,000.… Hack Industrial ChatGPT ★★★
knowbe4.webp 2023-02-21 14:00:00 CyberheistNews Vol 13 #08 [Heads Up] Reddit Is the Latest Victim of a Spear Phishing Attack Resulting in a Data Breach (direct link) CyberheistNews Vol 13 #08  |   February 21st, 2023 [Heads Up] Reddit Is the Latest Victim of a Spear Phishing Attack Resulting in a Data Breach There is a lot to learn from Reddit's recent data breach, which was the result of an employee falling for a "sophisticated and highly-targeted" spear phishing attack. I spend a lot of time talking about phishing attacks and the specifics that closely surround that pivotal action taken by the user once they are duped into believing the phishing email was legitimate. However, there are additional details about the attack we can analyze to see what kind of access the attacker was able to garner from this attack. But first, here are the basics: According to Reddit, an attacker set up a website that impersonated the company's intranet gateway, then sent targeted phishing emails to Reddit employees. The site was designed to steal credentials and two-factor authentication tokens. There are only a few details from the breach, but the notification does mention that the threat actor was able to access "some internal docs, code, as well as some internal dashboards and business systems." Since the notice does imply that only a single employee fell victim, we have to make a few assumptions about this attack: The attacker had some knowledge of Reddit's internal workings – The fact that the attacker can spoof an intranet gateway shows they had some familiarity with the gateway's look and feel, and its use by Reddit employees. The targeting of victims was limited to users with specific desired access – Given the knowledge about the intranet, it's reasonable to believe that the attacker(s) targeted users with specific roles within Reddit. 
From the use of the term "code," I'm going to assume the target was developers or someone on the product side of Reddit. The attacker may have been an initial access broker – Despite the access gained that Reddit is making out to be not a big deal, they do also mention that no production systems were accessed. This makes me believe that this attack may have been focused on gaining a foothold within Reddit versus penetrating more sensitive systems and data. There are also a few takeaways from this attack that you can learn from: 2FA is an important security measure – Despite the fact that the threat actor collected and (I'm guessing) passed the credentials and 2FA details onto the legitimate Intranet gateway-a classic man-in-the Data Breach Hack Threat Guideline ChatGPT ★★
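The "classic man-in-the-middle" alluded to above works against one-time codes because a TOTP value stays valid for its entire time step, so a code phished in real time can be replayed to the legitimate site seconds later. A minimal RFC 6238-style sketch with an illustrative shared secret and a fixed timestamp for reproducibility:

```python
# Sketch: why a relayed 2FA code still works. A TOTP code depends only on
# the shared secret and the 30-second counter window, not on who types it.
import hashlib
import hmac
import struct

def totp(secret, at, step=30, digits=6):
    """RFC 6238-style TOTP from a raw byte secret at Unix time `at`."""
    counter = int(at // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"illustrative-shared-secret"
attack_time = 1_000_000_000                  # fixed timestamp for the example
victim_code = totp(secret, attack_time)      # typed into the phishing page
relayed = totp(secret, attack_time + 5)      # attacker replays 5 seconds later
print(victim_code == relayed)                # True: same 30-second step
```

This relay window is one reason origin-bound authenticators (e.g. FIDO2/WebAuthn) resist this attack where one-time codes do not: the credential is cryptographically tied to the real site's origin, so the phishing page never obtains anything replayable.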
knowbe4.webp 2023-02-01 14:24:06 Artificial Intelligence, ChatGPT and Cybersecurity: A Match Made in Heaven or a Hack Waiting to Happen? (direct link) Artificial Intelligence, ChatGPT and Cybersecurity: A Match Made in Heaven or a Hack Waiting to Happen? Hack ChatGPT ★★
Checkpoint.webp 2022-12-19 11:14:43 OpwnAI: AI That Can Save the Day or HACK it Away (direct link) Research by: Sharon Ben-Moshe, Gil Gekker, Golan Cohen Introduction Due to ChatGPT, OpenAI's release of the new interface for its Large Language Model (LLM), in the last few weeks there has been an explosion of interest in General AI in the media and on social networks. This model is used in many applications all over […] Hack ChatGPT ★★★
Last update at: 2024-05-08 23:08:04
See our sources.

To see everything: Our RSS (filtered) Twitter