Src |
Date (GMT) |
Titre |
Description |
Tags |
Stories |
Notes |
 |
2025-04-11 13:00:00 |
11 bogues trouvés dans l'application Android de chatte de perplexité AI \\ 11 Bugs Found in Perplexity AI\\'s Chatbot Android App (lien direct) |
Les chercheurs caractérisent le chatbot de l'intelligence artificielle de l'entreprise comme moins sécurisée que Chatgpt et même Deepseek.
Researchers characterize the company\'s artificial intelligence chatbot as less secure than ChatGPT and even DeepSeek. |
Mobile
|
ChatGPT
|
★★★
|
 |
2025-03-07 09:00:00 |
PocketPal AI, l\'assistant IA 100% local sur Android / iOS (lien direct) |
Je suis un grand fan de LLM, et j’ai bien sûr les applications Claude d’Anthropic et ChatGPT installés sur mon ordinateur et mon smartphone. Mais ces outils ont leurs défaut. Déjà si vous n’avez pas de connexion internet, bah c’est mort ! Et ne parlons même pas de la confidentialité de vos conversations qui transitent par des serveurs distants… Heureusement, grâce à PocketPal AI, on va tous pouvoir discuter avec une IA directement depuis votre smartphone, 100% en local ! |
Tool
Mobile
|
ChatGPT
|
★★★
|
 |
2025-02-26 07:39:18 |
GPT 4.5 d'Openai \\ a été repéré dans Android Beta, lancez imminente OpenAI\\'s GPT 4.5 spotted in Android beta, launch imminent (lien direct) |
Le plus récent modèle d'Openai \\, GPT-4.5, arrive plus tôt que prévu. Une nouvelle référence a été repérée sur l'application Android de Chatgpt \\ qui pointe vers un modèle appelé "GPT-4.5 Research Preview", mais il semble que cela sera initialement limité à ceux qui ont un abonnement Pro. [...]
OpenAI\'s newest model, GPT-4.5, is coming sooner than we expected. A new reference has been spotted on ChatGPT\'s Android app that points to a model called "GPT-4.5 research preview," but it looks like it will initially be limited to those with a Pro subscription. [...] |
Mobile
|
ChatGPT
|
★★★
|
 |
2025-01-30 13:00:34 |
DeepSeek\'s Growing Influence Sparks a Surge in Frauds and Phishing Attacks (lien direct) |
Overview
DeepSeek is a Chinese artificial intelligence company that has developed open-source large language models (LLMs). In January 2025, DeepSeek launched its first free chatbot app, “DeepSeek - AI Assistant”, which rapidly became the most downloaded free app on the iOS App Store in the United States, surpassing even OpenAI\'s ChatGPT.
However, with rapid growth comes new risks-cybercriminals are exploiting DeepSeek\'s reputation through phishing campaigns, fake investment scams, and malware disguised as DeepSeek. This analysis seeks to explore recent incidents where Threat Actors (TAs) have impersonated DeepSeek to target users, highlighting their tactics and how readers can secure themselves accordingly.
Recently, Cyble Research and Intelligence Labs (CRIL) identified multiple suspicious websites impersonating DeepSeek. Many of these sites were linked to crypto phishing schemes and fraudulent investment scams. We have compiled a list of the identified suspicious sites:
abs-register[.]com
deep-whitelist[.]com
deepseek-ai[.]cloud
deepseek[.]boats
deepseek-shares[.]com
deepseek-aiassistant[.]com
usadeepseek[.]com
Campaign Details
Crypto phishing leveraging the popularity of DeepSeek
CRIL uncovered a crypto phishin |
Spam
Malware
Threat
Mobile
|
ChatGPT
|
★★★
|
 |
2024-10-01 14:13:00 |
Gemini Live est enfin disponible pour tous les téléphones Android - comment y accéder gratuitement Gemini Live is finally available for all Android phones - how to access it for free (lien direct) |
Vous voulez un assistant vocal avec lequel vous pouvez avoir des conversations naturelles?Vous n'aurez peut-être pas besoin de payer le mode vocal de Chatgpt \\.
Want a voice assistant you can have natural conversations with? You may not need to pay for ChatGPT\'s Voice Mode. |
Mobile
|
ChatGPT
|
★★
|
 |
2024-09-30 13:21:55 |
Faits saillants hebdomadaires OSINT, 30 septembre 2024 Weekly OSINT Highlights, 30 September 2024 (lien direct) |
## Snapshot
Last week\'s OSINT reporting highlighted diverse cyber threats involving advanced attack vectors and highly adaptive threat actors. Many reports centered on APT groups like Patchwork, Sparkling Pisces, and Transparent Tribe, which employed tactics such as DLL sideloading, keylogging, and API patching. The attack vectors ranged from phishing emails and malicious LNK files to sophisticated malware disguised as legitimate software like Google Chrome and Microsoft Teams. Threat actors targeted a variety of sectors, with particular focus on government entities in South Asia, organizations in the U.S., and individuals in India. These campaigns underscored the increased targeting of specific industries and regions, revealing the evolving techniques employed by cybercriminals to maintain persistence and evade detection.
## Description
1. [Twelve Group Targets Russian Government Organizations](https://sip.security.microsoft.com/intel-explorer/articles/5fd0ceda): Researchers at Kaspersky identified a threat group called Twelve, targeting Russian government organizations. Their activities appear motivated by hacktivism, utilizing tools such as Cobalt Strike and mimikatz while exfiltrating sensitive information and employing ransomware like LockBit 3.0. Twelve shares infrastructure and tactics with the DARKSTAR ransomware group.
2. [Kryptina Ransomware-as-a-Service Evolution](https://security.microsoft.com/intel-explorer/articles/2a16b748): Kryptina Ransomware-as-a-Service has evolved from a free tool to being actively used in enterprise attacks, particularly under the Mallox ransomware family, which is sometimes referred to as FARGO, XOLLAM, or BOZON. The commoditization of ransomware tools complicates malware tracking as affiliates blend different codebases into new variants, with Mallox operators opportunistically targeting \'timely\' vulnerabilities like MSSQL Server through brute force attacks for initial access.
3. [North Korean IT Workers Targeting Tech Sector:](https://sip.security.microsoft.com/intel-explorer/articles/bc485b8b) Mandiant reports on UNC5267, tracked by Microsoft as Storm-0287, a decentralized threat group of North Korean IT workers sent abroad to secure jobs with Western tech companies. These individuals disguise themselves as foreign nationals to generate revenue for the North Korean regime, aiming to evade sanctions and finance its weapons programs, while also posing significant risks of espionage and system disruption through elevated access.
4. [Necro Trojan Resurgence](https://sip.security.microsoft.com/intel-explorer/articles/00186f0c): Kaspersky\'s Secure List reveals the resurgence of the Necro Trojan, impacting both official and modified versions of popular applications like Spotify and Minecraft, and affecting over 11 million Android devices globally. Utilizing advanced techniques such as steganography to hide its payload, the malware allows attackers to run unauthorized ads, download files, and install additional malware, with recent attacks observed across countries like Russia, Brazil, and Vietnam.
5. [Android Spyware Campaign in South Korea:](https://sip.security.microsoft.com/intel-explorer/articles/e4645053) Cyble Research and Intelligence Labs (CRIL) uncovered a new Android spyware campaign targeting individuals in South Korea since June 2024, which disguises itself as legitimate apps and leverages Amazon AWS S3 buckets for exfiltration. The spyware effectively steals sensitive data such as SMS messages, contacts, images, and videos, while remaining undetected by major antivirus solutions.
6. [New Variant of RomCom Malware:](https://sip.security.microsoft.com/intel-explorer/articles/159819ae) Unit 42 researchers have identified "SnipBot," a new variant of the RomCom malware family, which utilizes advanced obfuscation methods and anti-sandbox techniques. Targeting sectors such as IT services, legal, and agriculture since at least 2022, the malware employs a multi-stage infection chain, and researchers suggest the threat actors\' motives might have s |
Ransomware
Malware
Tool
Vulnerability
Threat
Patching
Mobile
|
ChatGPT
APT 36
|
★★
|
 |
2024-09-19 12:30:00 |
Gemini Live frappe enfin les téléphones Android - comment y accéder gratuitement Gemini Live is finally hitting Android phones - how to access it for free (lien direct) |
Vous voulez un assistant vocal avec lequel vous pouvez avoir des conversations naturelles?Vous n'aurez peut-être pas besoin d'attendre le mode vocal de Chatgpt \\.
Want a voice assistant you can have natural conversations with? You may not need to wait for ChatGPT\'s Voice Mode. |
Mobile
|
ChatGPT
|
★
|
 |
2024-03-07 11:00:00 |
Sécuriser l'IA Securing AI (lien direct) |
With the proliferation of AI/ML enabled technologies to deliver business value, the need to protect data privacy and secure AI/ML applications from security risks is paramount. An AI governance framework model like the NIST AI RMF to enable business innovation and manage risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles.
Vulnerabilities in ChatGPT
A recent discovered vulnerability found in version gpt-3.5-turbo exposed identifiable information. The vulnerability was reported in the news late November 2023. By repeating a particular word continuously to the chatbot it triggered the vulnerability. A group of security researchers with Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the “extractable memorization” of training data that an adversary can extract by querying a ML model without prior knowledge of the training dataset.
The researchers’ report show an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a new developed divergence attack on the aligned ChatGPT caused the model to emit training data 150 times higher. Findings show larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as the volume gets larger. While similar attacks have been documented with unaligned models, the new ChatGPT vulnerability exposed a successful attack on LLM models typically built with strict guardrails found in aligned models.
This raises questions about best practices and methods in how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy.
U.S. and UK’s Bilateral cybersecurity effort on securing AI
The US Cybersecurity Infrastructure and Security Agency (CISA) and UK’s National Cyber Security Center (NCSC) in cooperation with 21 agencies and ministries from 18 other countries are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI as part of the U.S. and UK’s bilateral cybersecurity effort was announced at the end of November 2023.
The pledge is an acknowledgement of AI risk by nation leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The Department of Homeland Security (DHS) CISA and UK NCSC joint guidelines for Secure AI system Development aims to ensure cybersecurity decisions are embedded at every stage of the AI development lifecycle from the start and throughout, and not as an afterthought.
Securing AI by design
Securing AI by design is a key approach to mitigate cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle process is secure from design to development, deployment, and operations and maintenance is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development aligns closely to software development life cycle practices defined in the NSCS’s Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF).
The 4 pillars that embody the Guidelines for Secure AI System Development offers guidance for AI providers of any systems whether newly created from the ground up or built on top of tools and services provided from |
Tool
Vulnerability
Threat
Mobile
Medical
Cloud
Technical
|
ChatGPT
|
★★
|
 |
2024-01-31 10:30:00 |
ESET Research Podcast: Chatgpt, The Moveit Hack et Pandora ESET Research Podcast: ChatGPT, the MOVEit hack, and Pandora (lien direct) |
Un chatbot AI Kindle par inadvertance un boom de la cybercriminalité, des bandits de ransomware pluncent des organisations sans déploiement
An AI chatbot inadvertently kindles a cybercrime boom, ransomware bandits plunder organizations without deploying ransomware, and a new botnet enslaves Android TV boxes |
Ransomware
Hack
Mobile
|
ChatGPT
|
★★★
|
 |
2024-01-05 21:36:17 |
Les utilisateurs d'Android pourraient bientôt remplacer Google Assistant par Chatgpt Android users could soon replace Google Assistant with ChatGPT (lien direct) |
L'application Android Chatgpt travaille sur la prise en charge des API assistantes d'Android \\.
The Android ChatGPT app is working on support for Android\'s assistant APIs. |
Mobile
|
ChatGPT
|
★★
|
 |
2023-12-22 22:47:44 |
Rapport de menace ESET: abus de nom de chatppt, Lumma Steal Maleware augmente, la prévalence de Spyware \\ Android Spinok SDK \\ ESET Threat Report: ChatGPT Name Abuses, Lumma Stealer Malware Increases, Android SpinOk SDK Spyware\\'s Prevalence (lien direct) |
Des conseils d'atténuation des risques sont fournis pour chacune de ces menaces de cybersécurité.
Risk mitigation tips are provided for each of these cybersecurity threats. |
Malware
Threat
Mobile
|
ChatGPT
|
★★★
|
 |
2023-11-28 23:05:04 |
Prédictions 2024 de Proofpoint \\: Brace for Impact Proofpoint\\'s 2024 Predictions: Brace for Impact (lien direct) |
In the ever-evolving landscape of cybersecurity, defenders find themselves navigating yet another challenging year. Threat actors persistently refine their tactics, techniques, and procedures (TTPs), showcasing adaptability and the rapid iteration of novel and complex attack chains. At the heart of this evolution lies a crucial shift: threat actors now prioritize identity over technology. While the specifics of TTPs and the targeted technology may change, one constant remains: humans and their identities are the most targeted links in the attack chain.
Recent instances of supply chain attacks exemplify this shift, illustrating how adversaries have pivoted from exploiting software vulnerabilities to targeting human vulnerabilities through social engineering and phishing. Notably, the innovative use of generative AI, especially its ability to improve phishing emails, exemplifies a shift towards manipulating human behavior rather than exploiting technological weaknesses.
As we reflect on 2023, it becomes evident that cyber threat actors possess the capabilities and resources to adapt their tactics in response to increased security measures such as multi-factor authentication (MFA). Looking ahead to 2024, the trend suggests that threats will persistently revolve around humans, compelling defenders to take a different approach to breaking the attack chain.
So, what\'s on the horizon?
The experts at Proofpoint provide insightful predictions for the next 12 months, shedding light on what security teams might encounter and the implications of these trends.
1. Cyber Heists: Casinos are Just the Tip of the Iceberg
Cyber criminals are increasingly targeting digital supply chain vendors, with a heightened focus on security and identity providers. Aggressive social engineering tactics, including phishing campaigns, are becoming more prevalent. The Scattered Spider group, responsible for ransomware attacks on Las Vegas casinos, showcases the sophistication of these tactics. Phishing help desk employees for login credentials and bypassing MFA through phishing one-time password (OTP) codes are becoming standard practices. These tactics have extended to supply chain attacks, compromising identity provider (IDP) vendors to access valuable customer information. The forecast for 2024 includes the replication and widespread adoption of such aggressive social engineering tactics, broadening the scope of initial compromise attempts beyond the traditional edge device and file transfer appliances.
2. Generative AI: The Double-Edged Sword
The explosive growth of generative AI tools like ChatGPT, FraudGPT and WormGPT bring both promise and peril, but the sky is not falling as far as cybersecurity is concerned. While large language models took the stage, the fear of misuse prompted the U.S. president to issue an executive order in October 2023. At the moment, threat actors are making bank doing other things. Why bother reinventing the model when it\'s working just fine? But they\'ll morph their TTPs when detection starts to improve in those areas.
On the flip side, more vendors will start injecting AI and large language models into their products and processes to boost their security offerings. Across the globe, privacy watchdogs and customers alike will demand responsible AI policies from technology companies, which means we\'ll start seeing statements being published about responsible AI policies. Expect both spectacular failures and responsible AI policies to emerge.
3. Mobile Device Phishing: The Rise of Omni-Channel Tactics take Centre Stage
A notable trend for 2023 was the dramatic increase in mobile device phishing and we expect this threat to rise even more in 2024. Threat actors are strategically redirecting victims to mobile interactions, exploiting the vulnerabilities inherent in mobile platforms. Conversational abuse, including conversational smishing, has experienced exponential growth. Multi-touch campaigns aim to lure users away from desktops to mobile devices, utilizing tactics like QR codes and fraudulent voice calls |
Ransomware
Malware
Tool
Vulnerability
Threat
Mobile
Prediction
Prediction
|
ChatGPT
ChatGPT
|
★★★
|
 |
2023-11-22 18:39:21 |
«Chatgpt with Voice» s'ouvre à tout le monde sur iOS et Android “ChatGPT with voice” opens up to everyone on iOS and Android (lien direct) |
Tous les utilisateurs d'Android et iOS peuvent bientôt appuyer sur une icône de casque et commencer à discuter.
All Android and iOS users can soon tap a headphone icon and start chatting. |
Mobile
|
ChatGPT
|
★★
|