What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
GoogleSec.webp 2023-06-23 12:03:59 Supply chain security for Go, Part 2: Compromised dependencies
(direct link)
Julie Qiu, Go Security & Reliability, and Roger Ng, Google Open Source Security Team. “Secure your dependencies” - it's the new supply chain mantra. With attacks targeting software supply chains sharply rising, open source developers need to monitor and judge the risks of the projects they rely on. Our previous installment of the Supply chain security for Go series shared the ecosystem tools available to Go developers to manage their dependencies and vulnerabilities. This second installment describes the ways that Go helps you trust the integrity of a Go package. Go has built-in protections against three major ways packages can be compromised before reaching you: a new, malicious version of your dependency is published; a package is withdrawn from the ecosystem; a malicious file is substituted for a currently used version of your dependency. In this blog post we look at real-world scenarios of each situation and show how Go helps protect you from similar attac ★★
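The integrity protections referenced above are anchored in the go.sum file and the public Go checksum database, which the go command consults automatically. As a rough illustration only, the Go sketch below fetches the checksum-database record for an arbitrary example module; the sum.golang.org lookup URL format is an assumption, and in practice `go mod download` and `go mod verify` do this for you.

```go
// checksum_lookup.go
//
// Illustrative only: fetches the checksum database record that the go
// toolchain compares against your local go.sum file. The lookup URL format
// is an assumption based on the public sum.golang.org endpoint.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Hypothetical module/version chosen purely for illustration.
	const module, version = "golang.org/x/text", "v0.9.0"

	url := fmt.Sprintf("https://sum.golang.org/lookup/%s@%s", module, version)
	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("lookup failed: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read failed: %v", err)
	}
	// The response contains the signed go.sum lines (module and go.mod hashes)
	// that the go command checks before trusting a downloaded module.
	fmt.Println(string(body))
}
```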
GoogleSec.webp 2023-06-22 12:05:42 Google Cloud Awards $313,337 in 2022 VRP Prizes
(direct link)
Anthony Weems, Information Security Engineer. 2022 was a successful year for Google's Vulnerability Reward Programs (VRPs), with over 2,900 security issues identified and fixed, and over $12 million in bounty rewards awarded to researchers. A significant amount of these vulnerability reports helped improve the security of Google Cloud products, which in turn helps improve security for our users, customers, and the Internet at large. We first announced the Google Cloud VRP Prize in 2019 to encourage security researchers to focus on the security of Google Cloud and to incentivize sharing knowledge on Cloud vulnerability research with the world. This year, we were excited to see an increase in collaboration between researchers, which often led to more detailed and complex vulnerability reports. After careful evaluation of the submissions, today we are excited to announce the winners of the 2022 Google Cloud VRP Prize. 2022 Google Cloud VRP Prize Winners: 1st Prize - $133,337: Yuval Avrahami for the report and write-up Privilege escalations in GKE Autopilot. Yuval's excellent write-up describes several attack paths that would allow an attacker with permission to create pods in an Autopilot cluster to escalate privileges and compromise the underlying node VMs. While thes Vulnerability Cloud Uber ★★
GoogleSec.webp 2023-06-20 11:58:57 Protect and manage browser extensions using Chrome Browser Cloud Management
(direct link)
Posted by Anuj Goyal, Product Manager, Chrome Browser. Browser extensions, while offering valuable functionalities, can seem risky to organizations. One major concern is the potential for security vulnerabilities. Poorly designed or malicious extensions could compromise data integrity and expose sensitive information to unauthorized access. Moreover, certain extensions may introduce performance issues or conflicts with other software, leading to system instability. Therefore, many organizations find it crucial to have visibility into the usage of extensions and the ability to control them. Chrome browser offers these extension management capabilities and reporting via Chrome Browser Cloud Management. In this blog post, we will walk you through how to utilize these features to keep your data and users safe. Visibility into Extensions being used in your environment: Having visibility into what and how extensions are being used enables IT and security teams to assess potential security implications, ensure compliance with organizational policies, and mitigate potential risks. There are three ways you can get critical information about extensions in your organization: 1. App and extension usage reporting: Organizations can gain visibility into every Chrome extension that is installed across an enterprise's fleet in Chrome App and Extension Usage Reporting. 2. Extension Risk Assessment: CRXcavator and Spin.AI Risk Assessment are tools used to assess the risks of browser extensions and minimize the risks associated with them. We are making extension scores via these two platforms available directly in Chrome Browser Cloud Management, so security teams can have an at-a-glance view of risk scores of the extensions being used in their browser environment. 3. Extension event reporting: Extension install events are now available to alert IT and security teams of new extension usage in their environment. Organizations can send critical browser security events to their chosen solution providers, such as Splunk, Crowdstrike, Palo Alto Networks, and Google solutions, including Chronicle, Cloud PubSub, and Google Workspace, for further analysis. You can also view the event logs directly in Chrome Browser Cloud Management. Cloud ★★★
GoogleSec.webp 2023-06-16 13:11:38 Bringing Transparency to Confidential Computing with SLSA
(direct link)
Asra Ali, Razieh Behjati, Tiziano Santoro, Software Engineers. Every day, personal data, such as location information, images, or text queries, is passed between your device and remote, cloud-based services. Your data is encrypted when in transit and at rest, but as potential attack vectors grow more sophisticated, data must also be protected during use by the service, especially for software systems that handle personally identifiable user data. Toward this goal, Google's Project Oak is a research effort that relies on the confidential computing paradigm to build an infrastructure for processing sensitive user data in a secure and privacy-preserving way: we ensure data is protected during transit, at rest, and while in use. As an assurance that the user data is in fact protected, we've open sourced Project Oak code, and have introduced a transparent release process to provide publicly inspectable evidence that the application was built from that source code. This blog post introduces Oak's transparent release process, which relies on the SLSA framework to generate cryptographic proof of the origin of Oak's confidential computing stack, and together with Oak's remote attestation process, allows users to cryptographically verify that their personal data was processed by a trustworthy application in a secure environment. Tool ★★
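For readers curious what consuming a SLSA provenance statement looks like in practice, here is a minimal Go sketch that reads the subject digests out of one. The JSON field names are assumptions based on the in-toto Statement layout that SLSA provenance builds on, and the file name is hypothetical; real verification should use a dedicated verifier (for example slsa-verifier) rather than hand-rolled parsing.

```go
// provenance_subject.go
//
// Minimal sketch: print the subject names and SHA-256 digests from a SLSA
// provenance statement so they can be compared against a downloaded artifact.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// statement models only the fields this sketch needs from the in-toto layout.
type statement struct {
	Type          string `json:"_type"`
	PredicateType string `json:"predicateType"`
	Subject       []struct {
		Name   string            `json:"name"`
		Digest map[string]string `json:"digest"`
	} `json:"subject"`
}

func main() {
	raw, err := os.ReadFile("provenance.json") // hypothetical file name
	if err != nil {
		log.Fatal(err)
	}
	var s statement
	if err := json.Unmarshal(raw, &s); err != nil {
		log.Fatal(err)
	}
	fmt.Println("predicate type:", s.PredicateType)
	for _, sub := range s.Subject {
		// Compare these digests against the artifact you actually fetched.
		fmt.Printf("subject %s sha256=%s\n", sub.Name, sub.Digest["sha256"])
	}
}
```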
GoogleSec.webp 2023-06-14 11:59:49 Learnings from kCTF VRP's 42 Linux kernel exploits submissions
(direct link)
Tamás Koczka, Security Engineer. In 2020, we integrated kCTF into Google's Vulnerability Rewards Program (VRP) to support researchers evaluating the security of Google Kubernetes Engine (GKE) and the underlying Linux kernel. As the Linux kernel is a key component not just for Google, but for the Internet, we started heavily investing in this area. We extended the VRP's scope and maximum reward in 2021 (to $50k), then again in February 2022 (to $91k), and finally in August 2022 (to $133k). In 2022, we also summarized our learnings to date in our cookbook, and introduced our experimental mitigations for the most common exploitation techniques. In this post, we'd like to share our learnings and statistics about the latest Linux kernel exploit submissions, how effective our Vulnerability Uber ★★
GoogleSec.webp 2023-06-01 11:59:52 Announcing the Chrome Browser Full Chain Exploit Bonus
(direct link)
Amy Ressler, Chrome Security Team, on behalf of the Chrome VRP. For 13 years, a key pillar of the Chrome security ecosystem has been security researchers finding security vulnerabilities in Chrome browser and reporting them to us through the Chrome Vulnerability Reward Program. Starting today and running through December 1, 2023, the first security bug report we receive with a functional full chain exploit, resulting in a Chrome sandbox escape, is eligible for triple the full reward amount. Your full chain exploit could result in a reward of up to $180,000 (potentially more with other bonuses). All subsequent full chains submitted during this period are eligible for double the full reward amount! We have historically put a premium on reports with exploits - “high quality reports with a functional exploit” is the highest tier of reward amounts in our Vulnerability Reward Program. Over the years, the Chrome browser threat model has evolved as features have matured and new features and mitigations, such as MiraclePtr, have been introduced. Given these evolutions, we are always interested in explorations of new and novel approaches to fully exploit Chrome browser, and we want to offer opportunities to better incentivize this type of research. These exploits give us valuable insight into the potential attack vectors for exploiting Chrome, and allow us to identify strategies for better hardening of specific Chrome features as well as ideas for future broad-scale mitigation strategies. The full details of this bonus opportunity are available on the Chrome VRP Rules and Rewards page. The summary is as follows: Bug reports may be submitted in advance while exploit development continues during this 180-day window. Functional exploits must be submitted to Chrome by the end of the 180-day window to be eligible for the triple or double reward. The first functional full chain exploit we receive is eligible for the triple reward amount. The full chain exploit must result in a Chrome browser sandbox escape, with a demonstration of attacker control / code execution outside of the sandbox. The exploit must be able to be performed remotely, with no or very limited reliance on user interaction. The exploit must have been functional in an active release channel of Chrome (Dev, Beta, Stable, Extended Stable) at the time of the initial bug reports in that channel. Please do not submit exploits developed from publicly disclosed security bugs or other artifacts from old, past versions of Chrome. As is consistent with our general rewards policy, if the exploit allows remote code execution (RCE) in the browser or another highly privileged process, such as the network or GPU process, resulting in a sandbox escape without the need for a first-stage bug, the renderer “high quality report with functional exploit” reward amount would be granted and included in the calculation of the bonus reward total.
Based on our Vulnerability Threat ★★★
GoogleSec.webp 2023-05-31 12:00:25 Adding Chrome Browser Cloud Management remediation actions in Splunk using Alert Actions
(direct link)
Posted by Ashish Pujari, Chrome Security Team. Introduction: Chrome is trusted by millions of business users as a secure enterprise browser. Organizations can use Chrome Browser Cloud Management to help manage Chrome browsers more effectively. As an admin, they can use the Google Admin console to get Chrome to report critical security events to third-party service providers such as Splunk® to create custom enterprise security remediation workflows. Security remediation is the process of responding to security events that have been triggered by a system or a user. Remediation can be done manually or automatically, and it is an important part of an enterprise security program. Why is Automated Security Remediation Important? When a security event is identified, it is imperative to respond as soon as possible to prevent data exfiltration and to prevent the attacker from gaining a foothold in the enterprise. Organizations with mature security processes utilize automated remediation to improve the security posture by reducing the time it takes to respond to security events. This allows the usually overburdened Security Operations Center (SOC) teams to avoid alert fatigue. Automated Security Remediation using Chrome Browser Cloud Management and Splunk: Chrome integrates with Chrome Enterprise Recommended partners such as Splunk® using Chrome Enterprise Connectors to report security events such as malware transfer, unsafe site visits, and password reuse. Other supported events can be found on our support page. The Splunk integration with Chrome browser allows organizations to collect, analyze, and extract insights from security events. The extended security insights into managed browsers will enable SOC teams to perform better informed automated security remediations using Splunk® Alert Actions. Splunk Alert Actions are a great capability for automating security remediation tasks. By creating alert actions, enterprises can automate the process of identifying, prioritizing, and remediating security threats. In Splunk®, SOC teams can use alerts to monitor for and respond to specific Chrome Browser Cloud Management events. Alerts use a saved search to look for events in real time or on a schedule and can trigger an Alert Action when search results meet specific conditions as outlined in the diagram below. Use Case: If a user downloads a malicious file after bypassing a Chrome “Dangerous File” message, their managed browser/managed CrOS device should be quarantined. Prerequisites: Create a Chrome Browser Cloud Management account at no additional cost Malware Cloud ★★
GoogleSec.webp 2023-05-26 12:02:28 Time to challenge yourself in the 2023 Google CTF!
(direct link)
Vincent Winstead, Technical Program Manager. It's Google CTF time! Get your hacking toolbox ready and prepare your caffeine for rapid intake. The competition kicks off on June 23 2023 6:00 PM UTC and runs through June 25 2023 6:00 PM UTC. Registration is now open at g.co/ctf. Google CTF gives you a chance to challenge your skillz, show off your hacktastic abilities, and learn some new tricks along the way. It consists of a set of computer security puzzles (or challenges) involving reverse-engineering, memory corruption, cryptography, web technologies, and more. Use obscure security knowledge to find exploits through bugs and creative misuse. With each completed challenge your team will earn points and move up through the ranks. The top 8 teams will qualify for our Hackceler8 competition taking place in Tokyo later this year. Hackceler8 is our experimental esport-style hacking game, custom-made to mix CTF and speedrunning. In the competition, teams need to find clever ways to abuse the game features to capture flags as quickly as possible. See the 2022 highlight reel to get a sense of what it's like. The prize pool for this year's event stands at more than $32,000! Conference ★★★★
GoogleSec.webp 2023-05-25 12:00:55 Google Trust Services ACME API available to all users at no cost
(direct link)
David Kluge, Technical Program Manager, and Andy Warner, Product Manager. Nobody likes preventable site errors, but they happen disappointingly often. The last thing you want your customers to see is a dreaded 'Your connection is not private' error instead of the service they expected to reach. Most certificate errors are preventable and one of the best ways to help prevent issues is by automating your certificate lifecycle using the ACME standard. Google Trust Services now offers our ACME API to all users with a Google Cloud account (referred to as “users” here), allowing them to automatically acquire and renew publicly-trusted TLS certificates for free. The ACME API has been available as a preview and over 200 million certificates have been issued already, offering the same compatibility as major Google services like google.com or youtube.com. Tool Cloud ★★★
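As a minimal sketch of what pointing an ACME client at this service might look like in Go, the example below uses golang.org/x/crypto/acme/autocert. The directory URL shown and the External Account Binding requirement are assumptions based on the announcement; autocert defaults to Let's Encrypt, so treat this as illustrative rather than a drop-in configuration.

```go
// gts_acme.go
//
// Sketch of serving HTTPS with certificates obtained automatically over ACME.
// The Google Trust Services directory URL below is assumed, and the service
// also requires External Account Binding credentials issued through Google
// Cloud, which are omitted here to keep the example short.
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/crypto/acme"
	"golang.org/x/crypto/acme/autocert"
)

func main() {
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("www.example.com"), // your domain
		Cache:      autocert.DirCache("acme-cache"),           // persists keys and certs
		Client: &acme.Client{
			// Assumed Google Trust Services ACME directory endpoint.
			DirectoryURL: "https://dv.acme-v02.api.pki.goog/directory",
		},
	}

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello over automatically managed TLS")
	})

	srv := &http.Server{
		Addr:      ":443",
		Handler:   mux,
		TLSConfig: m.TLSConfig(), // solves TLS-ALPN-01 challenges on demand
	}
	// Certificates are requested and renewed automatically as clients connect.
	panic(srv.ListenAndServeTLS("", ""))
}
```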
GoogleSec.webp 2023-05-24 12:49:28 Announcing the launch of GUAC v0.1
(direct link)
Brandon Lum and Mihai Maruseac, Google Open Source Security Team. Today, we are announcing the launch of the v0.1 version of Graph for Understanding Artifact Composition (GUAC). Introduced at KubeCon 2022 in October, GUAC targets a critical need in the software industry to understand the software supply chain. In collaboration with Kusari, Purdue University, Citi, and community members, we have incorporated feedback from our early testers to improve GUAC and make it more useful for security professionals. This improved version is now available as an API for you to start developing on top of, and integrating into, your systems. The need for GUAC: High-profile incidents such as SolarWinds, and the recent 3CX supply chain double-exposure, are evidence that supply chain attacks are getting more sophisticated. As highlighted by the Tool Vulnerability Threat Yahoo ★★
GoogleSec.webp 2023-05-23 12:01:36 How the Chrome Root Program Keeps Users Safe
(direct link)
Posted by Chrome Root Program, Chrome Security Team. What is the Chrome Root Program? A root program is one of the foundations for securing connections to websites. The Chrome Root Program was announced in September 2022. If you missed it, don't worry - we'll give you a quick summary below! Chrome Root Program: TL;DR Chrome uses digital certificates (often referred to as “certificates,” “HTTPS certificates,” or “server authentication certificates”) to ensure the connections it makes for its users are secure and private. Certificates are issued by trusted entities called “Certification Authorities” (CAs). The collection of digital certificates, CA systems, and other related online services is the foundation of HTTPS and is often referred to as the “Web PKI.” Before issuing a certificate to a website, the CA must verify that the certificate requestor legitimately controls the domain whose name will be represented in the certificate. This process is often referred to as “domain validation” and there are several methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value's presence. Typically, domain validation practices must conform with a set of security requirements described in both industry-wide and browser-specific policies, like the CA/Browser Forum “Baseline Requirements” and the Chrome Root Program policy. Upon connecting to a website, Chrome verifies that a recognized (i.e., trusted) CA issued its certificate, while also performing additional evaluations of the connection's security properties (e.g., validating data from Certificate Transparency logs). Once Chrome determines that the certificate is valid, Chrome can use it to establish an encrypted connection to the website. Encrypted connections prevent attackers from being able to intercept (i.e., eavesdrop) or modify communication. In security speak, this is known as confidentiality and integrity. The Chrome Root Program, led by members of the Chrome Security team, provides governance and security review to determine the set of CAs trusted by default in Chrome. This set of so-called "root certificates" is known as the Chrome Root Store. How does the Chrome Root Program keep users safe? The Chrome Root Program keeps users safe by ensuring the CAs Chrome trusts to validate domains are worthy of that trust. We do that by: administering policy and governance activities to manage the set of CAs trusted by default in Chrome, evaluating impact and corresponding security implications related to public security incident disclosures by participating CAs, and leading positive change to make the ecosystem more resilient. Policy and Governance: The Chrome Root Program policy defines the minimum requirements a CA owner must meet for inclusion in the Chrome Root Store. It incorporates the industry-wide CA/Browser Forum Baseline Requirements and further adds security controls to improve Chrome user security. The CA ★★
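The "random value placed on a website" form of domain validation described above can be pictured with a small, purely conceptual Go sketch, loosely modeled on ACME's HTTP-01 challenge. The path and token are hypothetical, and real validation is driven by the CA (usually via an ACME client), not by hand-written handlers like this.

```go
// domain_validation_sketch.go
//
// Conceptual sketch only: publish a CA-supplied value at a well-known URL so
// the CA can confirm control of the hostname. Real ACME HTTP-01 responses are
// "token.<account-key-thumbprint>", not the bare token shown here.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Token the CA asked us to publish; proves we control this hostname.
	const token = "example-validation-token" // hypothetical value
	const challengePath = "/.well-known/acme-challenge/" + token

	http.HandleFunc(challengePath, func(w http.ResponseWriter, r *http.Request) {
		// The CA fetches this URL and checks the response body.
		fmt.Fprint(w, token)
	})

	log.Println("serving validation token at", challengePath)
	log.Fatal(http.ListenAndServe(":80", nil))
}
```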
GoogleSec.webp 2023-05-17 11:59:38 New Android & Google Device Vulnerability Reward Program Initiatives
(direct link)
Posted by Sarah Jacobus, Vulnerability Rewards Team. As technology continues to advance, so do the efforts of cybercriminals looking to exploit vulnerabilities in software and devices. That is why at Google and Android, security is a top priority, and we are constantly working to make our products more secure. One way we do this is through our Vulnerability Reward Programs (VRPs), which incentivize security researchers to find and report vulnerabilities in our operating system and devices. We are pleased to announce that we are implementing a new quality rating system for security vulnerability reports to encourage more security research in the higher-impact areas of our products and keep our users secure. This system will rate vulnerability reports as high, medium, or low quality based on the level of detail provided in the report. We believe this new system will encourage researchers to provide more detailed reports, which will help us fix the reported issues more quickly and enable researchers to receive higher bounty rewards. The highest quality and most critical vulnerabilities are now eligible for larger rewards of up to $15,000! There are a few key elements we look for: Accurate and detailed description: A report should clearly and accurately describe the vulnerability, including the device name and version. The description should be detailed enough to easily understand the issue and start working on a fix. Root cause analysis: A report should include a complete root cause analysis that describes why the issue occurs and which Android source code should be patched to fix it. This analysis should be thorough and provide enough information to understand the underlying cause of the vulnerability. Proof of concept: A report should include a proof of concept that effectively demonstrates the vulnerability. This can include video recordings, debugger output, or other relevant information. The proof of concept should be high quality and include the minimum amount of code possible to demonstrate the issue. Reproducibility: A report should include a step-by-step explanation of how to reproduce the vulnerability on an eligible device running the latest version. This information should be clear and concise and should allow our engineers to easily reproduce the issue and start working on a fix. Evidence of reachability: Finally, a report should include evidence or analysis that demonstrates the type of issue and the level of access or execution achieved. *Note: these criteria may change over time. For the most up-to-date information, please refer to our public rules page. In addition, starting March 15, 2023, Android will no longer assign Common Vulnerabilities and Exposures (CVEs) to most moderate severity issues. CVEs will continue to be assigned to critical and high severity vulnerabilities. We believe that incentivizing researchers to provide high-quality reports will benefit both the broader security community and our Vulnerability ★★
GoogleSec.webp 2023-05-15 13:35:50 $22k awarded to SBFT '23 fuzzing competition winners
(direct link)
Dongge Liu, Jonathan Metzman and Oliver Chang, Google Open Source Security Team. Google's Open Source Security Team recently sponsored a fuzzing competition as part of ICSE's Search-Based and Fuzz Testing (SBFT) Workshop. Our goal was to encourage the development of new fuzzing techniques, which can lead to the discovery of software vulnerabilities and ultimately a safer open source ecosystem. The competitors' fuzzers were judged on code coverage and their ability to discover bugs: HasteFuzz took the $11,337 prize for code coverage; PASTIS and AFLrustrust tied for bug discovery and split the $11,337 prize. Competitors were evaluated using Conference ★★
GoogleSec.webp 2023-05-11 12:44:52 Introducing a new way to buzz for eBPF vulnerabilities
(direct link)
Juan José López Jaimez, Security Researcher, and Meador Inge, Security Engineer. Today, we are announcing Buzzer, a new eBPF fuzzing framework that aims to help harden the Linux kernel. What is eBPF and how does it verify safety? eBPF is a technology that allows developers and sysadmins to easily run programs in a privileged context, like an operating system kernel. Recently, its popularity has increased, with more products adopting it as, for example, a network filtering solution. At the same time, it has maintained its relevance in the security research community, since it provides a powerful attack surface into the operating system. While there are many solutions for fuzzing vulnerabilities in the Linux kernel, they are not necessarily tailored to the unique features of eBPF. In particular, eBPF has many complex security rules that programs must follow to be considered valid and safe. These rules are enforced by a component of eBPF referred to as the "verifier". The correctness properties of the verifier implementation have proven difficult to understand by reading the source code alone. That's why our security team at Google decided to create a new fuzzer framework that aims to test the limits of the eBPF verifier through generating eBPF programs. The eBPF verifier's main goal is to make sure that a program satisfies a certain set of safety rules, for example: programs should not be able to write outside designated memory regions, certain arithmetic operations should be restricted on pointers, and so on. However, like all pieces of software, there can be holes in the logic of these checks. This could potentially cause unsafe behavior of an eBPF program and have security implications. ★★
GoogleSec.webp 2023-05-10 14:59:36 I/O 2023: What's new in Android security and privacy
(direct link)
Posted by Ronnie Falcon, Product Manager. Android is built with multiple layers of security and privacy protections to help keep you, your devices, and your data safe. Most importantly, we are committed to transparency, so you can see your device safety status and know how your data is being used. Android uses the best of Google's AI and machine learning expertise to proactively protect you and help keep you out of harm's way. We also empower you with tools that help you take control of your privacy. I/O is a great moment to show how we bring these features and protections all together to help you stay safe from threats like phishing attacks and password theft, while remaining in charge of your personal data. Safe Browsing: faster, more intelligent protection. Android uses Safe Browsing to protect billions of users from web-based threats, like deceptive phishing sites. This happens in the Chrome default browser and also in Android WebView, when you open web content from apps. Safe Browsing is getting a big upgrade with a new real-time API that helps ensure you're warned about fast-emerging malicious sites. With the newest version of Safe Browsing, devices will do real-time blocklist checks for low reputation sites. Our internal analysis has found that a significant number of phishing sites only exist for less than ten minutes to try and stay ahead of block-lists. With this real-time detection, we expect we'll be able to block an additional 25 percent of phishing attempts every month in Chrome and Android1. Safe Browsing isn't just getting faster at warning users. We've also been building in more intelligence, leveraging Google's advances in AI. Last year, Chrome browser on Android and desktop started utilizing a new image-based phishing detection machine learning model to visually inspect fake sites that try to pass themselves off as legitimate log-in pages. By leveraging a TensorFlow Lite model, we're able to find 3x more2 phishing sites compared to previous machine learning models and help warn you before you get tricked into signing in. This year, we're expanding the coverage of the model to detect hundreds more phishing campaigns and leverage new ML technologies. This is just one example of how we use our AI expertise to keep your data safe. Last year, Android used AI to protect users from 100 billion suspected spam messages and calls.3 Passkeys help move users beyond passwords: For many, passwords are the primary protection for their online life. In reality, they are frustrating to create, remember and are easily hacked. But hackers can't phish a password that doesn't exist. Which is why we are excited to share another major step forward in our passwordless journey: Passkeys. Spam Malware Tool ★★★
GoogleSec.webp 2023-05-05 12:00:43 Making authentication faster than ever: passkeys vs. passwords
(direct link)
Silvia Convento, Senior UX Researcher, and Court Jacinic, Senior UX Content Designer. In recognition of World Password Day 2023, Google announced its next step toward a passwordless future: passkeys. Passkeys are a new, passwordless authentication method that offer a convenient authentication experience for sites and apps, using just a fingerprint, face scan or other screen lock. They are designed to enhance online security for users. Because they are based on the public key cryptographic protocols that underpin security keys, they are resistant to phishing and other online attacks, making them more secure than SMS, app based one-time passwords and other forms of multi-factor authentication (MFA). And since passkeys are standardized, a single implementation enables a passwordless experience across browsers and operating systems. Passkeys can be used in two different ways: on the same device or from a different device. For example, if you need to sign in to a website on an Android device and you have a passkey stored on that same device, then using it only involves unlocking the phone. On the other hand, if you need to sign in to that website on the Chrome browser on your computer, you simply scan a QR code to connect the phone and computer to use the passkey. The technology behind the former (“same device passkey”) is not new: it was originally developed within the FIDO Alliance and first implemented by Google in August 2019 in select flows. Google and other FIDO members have been working together on enhancing the underlying technology of passkeys over the last few years to improve their usability and convenience. This technology behind passkeys allows users to log in to their account using any form of device-based user verification, such as biometrics or a PIN code. A credential is only registered once on a user's personal device, and then the device proves possession of the registered credential to the remote server by asking the user to use their device's screen lock. The user's biometric, or other screen lock data, is never sent to Google's servers - it stays securely stored on the device, and only cryptographic proof that the user has correctly provided it is sent to Google. Passkeys are also created and stored on your devices and are not sent to websites or apps. If you create a passkey on one device the Google Password Manager can make it available on your other devices that are signed into the same system account. Learn more on how passkey works under the hoo APT 38 APT 15 APT 10 Guam ★★
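The challenge/response idea described above - a device-bound private key signing a server-supplied challenge after a local screen-lock check - can be illustrated with a short conceptual Go sketch. This is not the WebAuthn/FIDO2 protocol itself (which adds origin binding, attestation, signature counters, and user-verification flags); it only shows the core public-key idea.

```go
// passkey_concept.go
//
// Conceptual sketch: the private key never leaves the "device", and the
// "server" only ever sees a signature over a fresh challenge plus the stored
// public key. Error handling is elided for brevity.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Registration: the authenticator creates a key pair; the server stores
	// only the public key.
	deviceKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	serverStoredPub := &deviceKey.PublicKey

	// Sign-in: the server sends a random challenge...
	challenge := make([]byte, 32)
	rand.Read(challenge)

	// ...the device signs it after the user passes the local screen lock
	// (the biometric/PIN check happens on-device and is never transmitted)...
	digest := sha256.Sum256(challenge)
	sig, _ := ecdsa.SignASN1(rand.Reader, deviceKey, digest[:])

	// ...and the server verifies the signature with the stored public key.
	fmt.Println("signature valid:", ecdsa.VerifyASN1(serverStoredPub, digest[:], sig))
}
```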
GoogleSec.webp 2023-05-05 11:02:09 Introducing rules_oci
(direct link)
Appu Goundan, Google Open Source Security Team. Today, we are announcing the General Availability 1.0 version of rules_oci, an open-sourced Bazel plugin (“ruleset”) that makes it simpler and more secure to build container images with Bazel. This effort was a collaboration we had with Aspect and the Rules Authors Special Interest Group. In this post, we'll explain how rules_oci differs from its predecessor, rules_docker, and describe the benefits it offers for both container image security and the container community. Bazel and Distroless for supply chain security: Google's popular build and test tool, known as Bazel, is gaining fast adoption within enterprises thanks to its ability to scale to the largest codebases and handle builds in almost any language. Because Bazel manages and caches dependencies by their integrity hash, it is uniquely suited to make assurances about the supply chain based on the Trust-on-First-Use principle. One way Google uses Bazel is to build widely used Distroless base images for Docker. Distroless is a series of minimal base images which improve supply-chain security. They restrict what's in your runtime container to precisely what's necessary for your app, which is a best practice employed by Google and other tech companies that have used containers in production for many years. Using minimal base images reduces the burden of managing risks associated with security vulnerabilities, licensing, and governance issues in the supply chain for building applications. rules_oci vs rules_docker ★★
GoogleSec.webp 2023-05-03 08:20:21 So long passwords, thanks for all the phish
(direct link)
By: Arnar Birgisson and Diana K Smetters, Identity Ecosystems and Google Account Security and Safety teams. Starting today, you can create and use passkeys on your personal Google Account. When you do, Google will not ask for your password or 2-Step Verification (2SV) when you sign in. Passkeys are a more convenient and safer alternative to passwords. They work on all major platforms and browsers, and allow users to sign in by unlocking their computer or mobile device with their fingerprint, face recognition or a local PIN. Using passwords puts a lot of responsibility on users. Choosing strong passwords and remembering them across various accounts can be hard. In addition, even the most savvy users are often misled into giving them up during phishing attempts. 2SV (2FA/MFA) helps, but again puts strain on the user with additional, unwanted friction and still doesn't fully protect against phishing attacks and targeted attacks like "SIM swaps" for SMS verification. Passkeys help address all these issues. Creating passkeys on your Google Account: When you add a passkey to your Google Account, we will start asking for it when you sign in or perform sensitive actions on your account. The passkey itself is stored on your local computer or mobile device, which will ask for your screen lock biometrics or PIN to confirm it's really you. Biometric data is never shared with Google or any other third party – the screen lock only unlocks the passkey locally. Unlike passwords, passkeys can only exist on your devices. They cannot be written down or accidentally given to a bad actor. When you use a passkey to sign ★★★
GoogleSec.webp 2023-05-02 09:01:05 Google and Apple lead initiative for an industry specification to address unwanted tracking
(direct link)
Companies welcome input from industry participants and advocacy groups on a draft specification to alert users in the event of suspected unwanted tracking. Location-tracking devices help users find personal items like their keys, purse, luggage, and more through crowdsourced finding networks. However, they can also be misused for unwanted tracking of individuals. Today Google and Apple jointly submitted a proposed industry specification to help combat the misuse of Bluetooth location-tracking devices for unwanted tracking. The first-of-its-kind specification will allow Bluetooth location-tracking devices to be compatible with unauthorized tracking detection and alerts across Android and iOS platforms. Samsung, Tile, Chipolo, eufy Security, and Pebblebee have expressed support for the draft specification, which offers best practices and instructions for manufacturers, should they choose to build these capabilities into their products. “Bluetooth trackers have created tremendous user benefits but also bring the potential of unwanted tracking, which requires industry-wide action to solve,” said Dave Burke, Google's vice president of Engineering for Android. “Android has an unwavering commitment to protecting users and will continue to develop strong safeguards and collaborate with the industry to help combat the misuse of Bluetooth tracking devices.” “Apple launched AirTag to give users the peace of mind knowing where to find their most important items,” said Ron Huang, Apple's vice president of Sensing and Connectivity. “We built AirTag and the Find My network with a set of proactive features to discourage unwanted tracking, a first in the industry, and we continue to make improvements to help ensure the technology is being used as intended. “This new industry specification builds upon the AirTag protections, and through collaboration with Google, results in a critical step forward to help combat unwanted tracking across iOS and Android.” In addition to incorporating feedback from device manufacturers, input from various safety and advocacy groups has been integrated into the development of the specification. “The National Network to End Domestic Violence has been advocating for universal standards to protect survivors – and all people – from the misuse of bluetooth tracking devices. This collaboration and the resulting standards are a significant step forward. NNEDV is encouraged by this progress,” said Erica Olsen, the National Network to End Domestic Violence's senior director of its Safety Net Project. “These new standards will minimize opportunities for abuse of this technology and decrease burdens on survivors in detecting unwanted trackers. We are grateful for these efforts and look forward to continuing to work together to address unwanted tracking and misuse.” “Today's release of a draft specification is a welcome step to confront harmful misuses of Bluetooth location trackers,” said Alexandra Reeve Givens, the Center for Democracy & Technology's president and CEO. “CDT continues to focus on ways to make these devices more detectable and reduce the likelihood that they will be used to track people. A key ★★
GoogleSec.webp 2023-04-28 11:59:39 Secure mobile payment transactions enabled by Android Protected Confirmation
(direct link)
Posted by Rae Wang, Director of Product Management (Android Security and Privacy Team). Unlike other mobile operating systems, Android is built with a transparent, open source architecture. We strongly believe that our users and the broader mobile ecosystem should be able to verify Android's security and safety, and not just take our word for it. We have demonstrated our deep belief in security transparency by investing in features that allow users to confirm that what they expect is happening on their device. The assurance of Android Protected Confirmation: One such feature is Android Protected Confirmation, an API that allows developers to use Android hardware to give users even more assurance that a critical action has been executed securely. Using a hardware-protected user interface, Android Protected Confirmation can help developers verify a user's action intent with a very high degree of confidence. This can be especially useful in a number of user moments - such as during mobile payment transactions - that benefit significantly from additional verification and security. We are excited to see that Android Protected Confirmation is now drawing attention in the ecosystem as a state-of-the-art method for confirming critical user actions via hardware. Recently, UBS Group AG and the Bern University of Applied Sciences, co-funded by Innosuisse and UBS Next, announced they are working with Google on a pilot project to establish Protected Confirmation as a common application programming interface standard. In a pilot planned for 2023, UBS online banking customers with Pixel 6 or 7 devices can use Android Protected Confirmation, backed by StrongBox, a certified vault with physical attack protection, to confirm payments and verify online purchases through a hardware-backed confirmation in their UBS Access app.
Demonstrating real-world use for Android Protected Confirmation: We worked closely with UBS to bring this pilot to life and to make sure they can test it on Google Pixel devices. Demonstrating the real-world use cases that are enabled by Android Protected Confirmation unlocks the promise of this tech ★★★
GoogleSec.webp 2023-04-27 11:01:43 How we fought bad apps and bad actors in 2022
(direct link)
Posted by Anu Yamunan and Khawaja Shams (Android Security and Privacy Team), and Mohet Saxena (Compute Trust and Safety). Keeping Google Play safe for users and developers remains a top priority for Google. Google Play Protect continues to scan billions of installed apps each day across billions of Android devices to keep users safe from threats like malware and unwanted software. In 2022, we prevented 1.43 million policy-violating apps from being published on Google Play in part due to new and improved security features and policy enhancements - in combination with our continuous investments in machine learning systems and app review processes. We also continued to combat malicious developers and fraud rings, banning 173K bad accounts, and preventing over $2 billion in fraudulent and abusive transactions. We've raised the bar for new developers to join the Play ecosystem with phone, email, and other identity verification methods, which contributed to a reduction in accounts used to publish violative apps. We continued to partner with SDK providers to limit sensitive data access and sharing, enhancing the privacy posture for over one million apps on Google Play. With strengthened Android platform protections and policies, and developer outreach and education, we prevented about 500K submitted apps from unnecessarily accessing sensitive permissions over the past 3 years. Developer Support and Collaboration to Help Keep Apps Safe: As the Android ecosystem expands, it's critical for us to work closely with the developer community to ensure they have the tools, knowledge, and support to build secure and trustworthy apps that respect user data security and privacy. In 2022, the App Security Improvements program helped developers fix ~500K security weaknesses affecting ~300K apps with a combined install base of approximately 250B installs. We also launched the Google Play SDK Index to help developers evaluate an SDK's reliability and safety and make informed decisions about whether an SDK is right for their business and their users. We will keep working closely with SDK providers to improve app and SDK safety, limit how user data is shared, and improve lines of communication with app developers. We also recently launched new features and resources to give developers a better policy experience. We've expanded our Helpline pilot to give more developers direct policy phone support. And we piloted the Google Play Developer Community so more developers can discuss policy questions and exchange best practices on how to build Malware Prediction Uber ★★★★
GoogleSec.webp 2023-04-26 11:00:21 Celebrating SLSA v1.0: securing the software supply chain for everyone
(direct link)
Bob Callaway, Staff Security Engineer, Google Open Source Security team. Last week the Open Source Security Foundation (OpenSSF) announced the release of SLSA v1.0, a framework that helps secure the software supply chain. Ten years of using an internal version of SLSA at Google has shown that it's crucial to warding off tampering and keeping software secure. It's especially gratifying to see SLSA reaching v1.0 as an open source project-contributors have come together to produce solutions that will benefit everyone. SLSA for safer supply chains: Developers and organizations that adopt SLSA will be protecting themselves against a variety of supply chain attacks, which have continued rising since Google first donated SLSA to OpenSSF in 2021. In that time, the industry has also seen a U.S. Executive Order on Cybersecurity and the associated NIST Secure Software Development Framework (SSDF) to guide national standards for software used by the U.S. government, as well as the Network and Information Security (NIS2) Directive in the European Union. SLSA offers not only an onramp to meeting these standards, but also a way to prepare for a climate of increased scrutiny on software development practices. As organizations benefit from using SLSA, it's also up to them to shoulder part of the burden of spreading these benefits to open source projects. Many maintainers of the critical open source projects that underpin the internet are volunteers; they cannot be expected to do all the work when so many of the rewards of adopting SLSA roll out across the supply chain to benefit everyone. Supply chain security for all: That's why beyond contributing to SLSA, we've also been laying the foundation to integrate supply chain solutions directly into the ecosystems and platforms used to create open source projects. We're also directly supporting open source maintainers, who often cite lack of time or resources as limiting factors when making security improvements to their projects. Our Open Source Security Upstream Team consists of developers who spend 100% of their time contributing to critical open source projects to make security improvements. For open source developers who choose to adopt SLSA on their own, we've funded the Secure Open Source Rewards Program, which pays developers directly for these types of security improvements. Currently, open source developers who want to secure their builds can use the free SLSA L3 GitHub Builder, which requires only a one-time adjustment to the traditional build process implemented through GitHub actions. There's also the SLSA Verifier tool for software consumers. Users of npm-or Node Package Manager, the world's largest software repository-can take advantage of their recently released beta SLSA integration, which streamlines the process of creating and verifying SLSA provenance through the npm command line interface. We're also supporting the integration of Sigstore into many major Tool Patching ★★
GoogleSec.webp 2023-04-24 12:00:03 Google Authenticator now supports Google Account synchronization
(direct link)
Christiaan Brand, Group Product Manager ★★★
GoogleSec.webp 2023-04-18 12:00:25 Securely Hosting User Data in Modern Web Applications
(direct link)
Posted by David Dworken, Information Security Engineer, Google Security Team. Many web applications need to display user-controlled content. This can be as simple as serving user-uploaded images (e.g. profile photos), or as complex as rendering user-controlled HTML (e.g. a web development tutorial). This has always been difficult to do securely, so we've worked to find easy, but secure solutions that can be applied to most types of web applications. Classical Solutions for Isolating Untrusted Content: The classic solution for securely serving user-controlled content is to use what are known as “sandbox domains”. The basic idea is that if your application's main domain is example.com, you could serve all untrusted content on exampleusercontent.com. Since these two domains are cross-site, any malicious content on exampleusercontent.com can't impact example.com. This approach can be used to safely serve all kinds of untrusted content including images, downloads, and HTML. While it may not seem like it is necessary to use this for images or downloads, doing so helps avoid risks from content sniffing, especially in legacy browsers. Sandbox domains are widely used across the industry and have worked well for a long time. But, they have two major downsides: Applications often need to restrict content access to a single user, which requires implementing authentication and authorization. Since sandbox domains purposefully do not share cookies with the main application domain, this is very difficult to do securely. To support authentication, sites either have to rely on capability URLs, or they have to set separate authentication cookies for the sandbox domain. This second method is especially problematic in the modern web where many browsers restrict cross-site cookies by default. While user content is isolated from the main site, it isn't isolated from other user content. This creates the risk of malicious user content attacking other data on the sandbox domain (e.g. via reading same-origin data). It is also worth noting that sandbox domains help mitigate phishing risks since resources are clearly segmented onto an isolated domain. Modern Solutions for Serving User Content: Over time the web has evolved, and there are now easier, more secure ways to serve untrusted content. There are many different approaches here, so we will outline two solutions that are currently in wide use at Google. Approach 1: Serving Inactive User Content. If a site only needs to serve inactive user content (i.e. content that is not HTML/JS, for example images and downloads), this can now be safely done without an isolated sandbox domain. There are two key steps: Always set the Content-Type header to a well-known MIME type that is supported by all browsers and guaranteed not to contain active content (when in doubt, application/octet-stream is a safe choice). In addition, always set the below response headers to ensure that the browser fully isolates the response. Threat ★★
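The excerpt above is cut off just before the recommended header list, so as a hedged illustration, the Go sketch below serves an uploaded blob with a conservative, commonly used set of isolation headers rather than a quote of the article's exact list.

```go
// serve_user_upload.go
//
// Minimal sketch of serving user-uploaded, inactive content with response
// headers that keep browsers from sniffing or executing it. The header set is
// a commonly used, conservative choice, not the article's exact list.
package main

import (
	"log"
	"net/http"
	"os"
)

func serveUpload(w http.ResponseWriter, r *http.Request) {
	data, err := os.ReadFile("uploads/example.bin") // hypothetical stored blob
	if err != nil {
		http.Error(w, "not found", http.StatusNotFound)
		return
	}

	h := w.Header()
	h.Set("Content-Type", "application/octet-stream") // safe, well-known type
	h.Set("X-Content-Type-Options", "nosniff")        // disable MIME sniffing
	h.Set("Content-Disposition", "attachment")        // download, don't render
	h.Set("Content-Security-Policy", "sandbox")       // isolate if rendered anyway
	w.Write(data)
}

func main() {
	http.HandleFunc("/user-content/", serveUpload)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```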
GoogleSec.webp 2023-04-13 12:04:31 Supply chain security for Go, Part 1: Vulnerability management
(direct link)
Posted by Julie Qiu, Go Security & Reliability, and Oliver Chang, Google Open Source Security Team. High profile open source vulnerabilities have made it clear that securing the supply chains underpinning modern software is an urgent, yet enormous, undertaking. As supply chains get more complicated, enterprise developers need to manage the tidal wave of vulnerabilities that propagate up through dependency trees. Open source maintainers need streamlined ways to vet proposed dependencies and protect their projects. A rise in attacks coupled with increasingly complex supply chains means that supply chain security problems need solutions on the ecosystem level. One way developers can manage this enormous risk is by choosing a more secure language. As part of Google's commitment to advancing cybersecurity and securing the software supply chain, Go maintainers are focused this year on hardening supply chain security, streamlining security information to our users, and making it easier than ever to make good security choices in Go. This is the first in a series of blog posts about how developers and enterprises can secure their supply chains with Go. Today's post covers how Go helps teams with the tricky problem of managing vulnerabilities in their open source packages. Extensive Package Insights: Before adopting a dependency, it's important to have high-quality information about the package. Seamless access to comprehensive information can be the difference between an informed choice and a future security incident from a vulnerability in your supply chain. Along with providing package documentation and version history, the Go package discovery site links to Open Source Insights. The Open Source Insights page includes vulnerability information, a dependency tree, and a security score provided by the OpenSSF Scorecard project. Scorecard evaluates projects on more than a dozen security metrics, each backed up with supporting information, and assigns the project an overall score out of ten to help users quickly judge its security stance (example). The Go package discovery site puts all these resources at developers' fingertips when they need them most-before taking on a potentially risky dependency. Curated Vulnerability Information: Large consumers of open source software must manage many packages and a high volume of vulnerabilities. For enterprise teams, filtering out noisy, low quality advisories and false positives from critical vulnerabilities is often the most important task in vulnerability management. If it is difficult to tell which vulnerabilities are important, it is impossible to properly prioritize their remediation. With granular advisory details, the Go vulnerability database removes barriers to vulnerability prioritization and remediation. All vulnerability database entries are reviewed and curated by the Go security team. As a result, entries are accurate and include detailed metadata to improve the quality of vulnerability scans and to make vulnerability information more actionable. This metadata includes information on affected functions, operating systems, and architectures. With this information, vulnerability scanners can reduce the number of false po Tool Vulnerability ★★
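The curated Go vulnerability database described above is served in OSV format, so one way to picture it is a direct query against the public OSV API for a single module version, as in the hedged Go sketch below. The module and version are arbitrary examples, and in day-to-day use govulncheck performs this lookup (plus reachability analysis of affected functions) for you.

```go
// osv_query_go.go
//
// Minimal sketch: ask OSV.dev whether a specific Go module version has known
// advisories. The request shape follows the public api.osv.dev/v1/query API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type query struct {
	Version string `json:"version"`
	Package struct {
		Name      string `json:"name"`
		Ecosystem string `json:"ecosystem"`
	} `json:"package"`
}

func main() {
	var q query
	q.Version = "3.2.0"                         // hypothetical example version
	q.Package.Name = "github.com/gin-gonic/gin" // hypothetical example module
	q.Package.Ecosystem = "Go"

	body, _ := json.Marshal(q)
	resp, err := http.Post("https://api.osv.dev/v1/query", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var result struct {
		Vulns []struct {
			ID      string `json:"id"`
			Summary string `json:"summary"`
		} `json:"vulns"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatal(err)
	}
	for _, v := range result.Vulns {
		fmt.Printf("%s: %s\n", v.ID, v.Summary)
	}
}
```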
GoogleSec.webp 2023-04-11 12:11:33 Annonce de l'API DEPS.DEV: données de dépendance critiques pour les chaînes d'approvisionnement sécurisées
Announcing the deps.dev API: critical dependency data for secure supply chains
(lien direct)
Posted by Jesper Sarnesjo and Nicky Ringland, Google Open Source Security Team Today, we are excited to announce the deps.dev API, which provides free access to the deps.dev dataset of security metadata, including dependencies, licenses, advisories, and other critical health and security signals for more than 50 million open source package versions. Software supply chain attacks are increasingly common and harmful, with high profile incidents such as Log4Shell, Codecov, and the recent 3CX hack. The overwhelming complexity of the software ecosystem causes trouble for even the most diligent and well-resourced developers. We hope the deps.dev API will help the community make sense of complex dependency data that allows them to respond to-or even prevent-these types of attacks. By integrating this data into tools, workflows, and analyses, developers can more easily understand the risks in their software supply chains. The power of dependency data As part of Google's ongoing efforts to improve open source security, the Open Source Insights team has built a reliable view of software metadata across 5 packaging ecosystems. The deps.dev data set is continuously updated from a range of sources: package registries, the Open Source Vulnerability database, code hosts such as GitHub and GitLab, and the software artifacts themselves. This includes 5 million packages, more than 50 million versions, from the Go, Maven, PyPI, npm, and Cargo ecosystems-and you'd better believe we're counting them! We collect and aggregate this data and derive transitive dependency graphs, advisory impact reports, OpenSSF Security Scorecard information, and more. Where the deps.dev website allows human exploration and examination, and the BigQuery dataset supports large-scale bulk data analysis, this new API enables programmatic, real-time access to the corpus for integration into tools, workflows, and analyses. The API is used by a number of teams internally at Google to support the security of our own products. One of the first publicly visible uses is the GUAC integration, which uses the deps.dev data to enrich SBOMs. We have more exciting integrations in the works, but we're most excited to see what the greater open source community builds! We see the API as being useful for tool builders, researchers, and tinkerers who want to answer questions like: What versions are available for this package? What are the licenses that cover this version of a package-or all the packages in my codebase? How many dependencies does this package have? What are they? Does the latest version of this package include changes to dependencies or licenses? What versions of what packages correspond to this file? Taken together, this information can help answer the most important overarching question: how much risk would this dependency add to my project? The API can help surface critical security information where and when developers can act. This data can be integrated into: IDE Plugins, to make dependency and security information immediately available. CI Tool Vulnerability ★★
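As a sketch of what programmatic access might look like, the Go snippet below fetches metadata for one package version over plain HTTPS. The endpoint path, package coordinates, and version string are assumptions for illustration; the authoritative routes and response fields are in the deps.dev API documentation.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical lookup of one npm package version. The exact URL pattern
	// is an assumption; confirm it against the published deps.dev API docs.
	url := "https://api.deps.dev/v3alpha/systems/npm/packages/lodash/versions/4.17.21"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// The JSON response carries dependency, license, and advisory signals
	// that can feed CI checks, IDE plugins, or batch analyses.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```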
GoogleSec.webp 2023-03-08 12:04:53 OSV and the Vulnerability Life Cycle (lien direct) Posted by Oliver Chang and Andrew Pollock, Google Open Source Security Team It is an interesting time for everyone concerned with open source vulnerabilities. The U.S. Executive Order on Improving the Nation's Cybersecurity requirements for vulnerability disclosure programs and assurances for software used by the US government will go into effect later this year. Finding and fixing security vulnerabilities has never been more important, yet with increasing interest in the area, the vulnerability management space has become fragmented-there are a lot of new tools and competing standards. In 2021, we announced the launch of OSV, a database of open source vulnerabilities built partially from vulnerabilities found through Google's OSS-Fuzz program. OSV has grown since then and now includes a widely adopted OpenSSF schema and a vulnerability scanner. In this blog post, we'll cover how these tools help maintainers track vulnerabilities from discovery to remediation, and how to use OSV together with other SBOM and VEX standards. Vulnerability Databases The lifecycle of a known vulnerability begins when it is discovered. To reach developers, the vulnerability needs to be added to a database. CVEs are the industry standard for describing vulnerabilities across all software, but there was a lack of an open source centric database. As a result, several independent vulnerability databases exist across different ecosystems. To address this, we announced the OSV Schema to unify open source vulnerability databases. The schema is machine readable, and is designed so dependencies can be easily matched to vulnerabilities using automation. The OSV Schema remains the only widely adopted schema that treats open source as a first class citizen. Since becoming a part of OpenSSF, the OSV Schema has seen adoption from services like GitHub, ecosystems such as Rust and Python, and Linux distributions such as Rocky Linux. Thanks to such wide community adoption of the OSV Schema, OSV.dev is able to provide a distributed vulnerability database and service that pulls from language specific authoritative sources. In total, the OSV.dev database now includes 43,302 vulnerabilities from 16 ecosystems as of March 2023. Users can check OSV for a comprehensive view of all known vulnerabilities in open source. Every vulnerability in OSV.dev contains package manager versions and git commit hashes, so open source users can easily determine if their packages are impacted because of the familiar style of versioning. Maintainers are also familiar with OSV's community driven and distributed collaboration on the development of OSV's database, tools, and schema. Matching The next step in managing vulnerabilities is to determine project dependencies and their associated vulnerabilities. Last December we released OSV-Scanner, a free, open source tool which scans software projects' lockfiles, SBOMs, or git repositories to identify vulnerabilities found in the Tool Vulnerability ★★★★
GoogleSec.webp 2023-03-08 11:59:13 Thank you and goodbye to the Chrome Cleanup Tool (lien direct) Posted by Jasika Bawa, Chrome Security Team Starting in Chrome 111 we will begin to turn down the Chrome Cleanup Tool, an application distributed to Chrome users on Windows to help find and remove unwanted software (UwS). Origin story The Chrome Cleanup Tool was introduced in 2015 to help users recover from unexpected settings changes, and to detect and remove unwanted software. To date, it has performed more than 80 million cleanups, helping to pave the way for a cleaner, safer web. A changing landscape In recent years, several factors have led us to reevaluate the need for this application to keep Chrome users on Windows safe. First, the user perspective – Chrome user complaints about UwS have continued to fall over the years, averaging out to around 3% of total complaints in the past year. Commensurate with this, we have observed a steady decline in UwS findings on users' machines. For example, last month just 0.06% of Chrome Cleanup Tool scans run by users detected known UwS. Next, several positive changes in the platform ecosystem have contributed to a more proactive safety stance than a reactive one. For example, Google Safe Browsing as well as antivirus software both block file-based UwS more effectively now, which was originally the goal of the Chrome Cleanup Tool. Where file-based UwS migrated over to extensions, our substantial investments in the Chrome Web Store review process have helped catch malicious extensions that violate the Chrome Web Store's policies. Finally, we've observed changing trends in the malware space with techniques such as Cookie Theft on the rise – as such, we've doubled down on defenses against such malware via a variety of improvements including hardened authentication workflows and advanced heuristics for blocking phishing and social engineering emails, malware landing pages, and downloads. What to expect Starting in Chrome 111, users will no longer be able to request a Chrome Cleanup Tool scan through Safety Check or leverage the "Reset settings and cleanup" option offered in chrome://settings on Windows. Chrome will also remove the component that periodically scans Windows machines and prompts users for cleanup should it find anything suspicious. Even without the Chrome Cleanup Tool, users are automatically protected by Safe Browsing in Chrome. Users also have the option to turn on Enhanced protection by navigating to chrome://settings/security – this mode substantially increases protection from dangerous websites and downloads by sharing real-time data with Safe Browsing. While we'll miss the Chrome Cleanup Tool, we wanted to take this opportunity to acknowledge its role in combating UwS for the past 8 years. We'll continue to monitor user feedback and trends in the malware ecosystem, and when adversaries adapt their techniques again – which they will – we'll be at the ready. As always, please feel free to send us feedback or find us on Twitter @googlechrome. Malware Tool ★★★
GoogleSec.webp 2023-03-02 12:42:15 Google Trust Services now offers TLS certificates for Google Domains customers (lien direct) Andy Warner, Google Trust Services, and Carl Krauss, Product Manager, Google DomainsWe're excited to announce changes that make getting Google Trust Services TLS certificates easier for Google Domains customers. With this integration, all Google Domains customers will be able to acquire public certificates for their websites at no additional cost, whether the site runs on a Google service or uses another provider. Additionally, Google Domains is now making an API available to allow for DNS-01 challenges with Google Domains DNS servers to issue and renew certificates automatically.Like the existing Google Cloud integration, Automatic Certificate Management Environment (ACME) protocol is used to enable seamless automatic lifecycle management of TLS certificates. These certificates are issued by the same Certificate Authority (CA) Google uses for its own sites, so they are widely supported across the entire spectrum of devices used to access your services.How do I use it?Using ACME ensures your certificates are renewed automatically and many hosting services already support ACME. If you're running your own web servers / services, there are ACME clients that integrate easily with common servers. To use this feature, you will need an API key called an External Account Binding key. This enables your certificate requests to be associated with your Google Domains account. You can get an API key by visiting Google Domains and navigating to the Security page for your domain. There you'll see a section for Google Trust Services where you can get your EAB Key.Example of EAB Credentials in Google DomainsAs an example, with the popular Certbot ACME client, the configuration to register an account looks like:certbot register --email --no-eff-email --server "https://dv.acme-v02.api.pki.goog/directory"  --eab-kid "" --eab-hmac-key ""The EAB_KEY_ID and EAB_HMAC_KEY are both provided on your Google Domains security page.After the account is created, you may issue certificates by running:certbot certonly -d --server "https://dv.acme-v02.api.pki.goog/directory" --standaloneThen fo ★★
GoogleSec.webp 2023-03-01 11:59:44 8 ways to secure Chrome browser for Google Workspace users (lien direct) Posted by Kiran Nair, Product Manager, Chrome Browser Your journey towards keeping your Google Workspace users and data safe, starts with bringing your Chrome browsers under Cloud Management at no additional cost. Chrome Browser Cloud Management is a single destination for applying Chrome Browser policies and security controls across Windows, Mac, Linux, iOS and Android. You also get deep visibility into your browser fleet including which browsers are out of date, which extensions your users are using and bringing insight to potential security blindspots in your enterprise. Managing Chrome from the cloud allows Google Workspace admins to enforce enterprise protections and policies to the whole browser on fully managed devices, which no longer requires a user to sign into Chrome to have policies enforced. You can also enforce policies that apply when your managed users sign in to Chrome browser on any Windows, Mac, or Linux computer (via Chrome Browser user-level management) --not just on corporate managed devices. This enables you to keep your corporate data and users safe, whether they are accessing work resources from fully managed, personal, or unmanaged devices used by your vendors. Getting started is easy. If your organization hasn't already, check out this guide for steps on how to enroll your devices. 2. Enforce built-in protections against Phishing, Ransomware & Malware Chrome uses Google's Safe Browsing technology to help protect billions of devices every day by showing warnings to users when they attempt to navigate to dangerous sites or download dangerous files. Safe Browsing is enabled by default for all users when they download Chrome. As an administrator, you can prevent your users from disabling Safe Browsing by enforcing the SafeBrowsingProtectionLevel policy. Over the past few years, we've seen threats on the web becoming increasingly sophisticated. Turning on Enhanced Safe Browsing will substantially increase protection Ransomware Malware Tool Threat Guideline Cloud ★★★
GoogleSec.webp 2023-02-28 09:00:14 Our commitment to fighting invalid traffic on Connected TV (lien direct) Posted by Michael Spaulding, Senior Product Manager, Ad Traffic QualityConnected TV (CTV) has not only transformed the entertainment world, it has also created a vibrant new platform for digital advertising. However, as with any innovative space, there are challenges that arise, including the emergence of bad actors aiming to siphon money away from advertisers and publishers through fraudulent or invalid ad traffic. Invalid traffic is an evolving challenge that has the potential to affect the integrity and health of digital advertising on CTV. However, there are steps the industry can take to combat invalid traffic and foster a clean, trustworthy, and sustainable ecosystem.Information sharing and following best practicesEvery player across the digital advertising ecosystem has the opportunity to help reduce the risk of CTV ad fraud. It starts by spreading awareness across the industry and building a commitment among partners to share best practices for defending against invalid traffic. Greater transparency and communication are crucial to creating lasting solutions.One key best practice is contributing to and using relevant industry standards. We encourage CTV inventory providers to follow the CTV/OTT Device & App Identification Guidelines and IFA Guidelines. These guidelines, both of which were developed by the IAB Tech Lab, foster greater transparency, which in turn reduces the risk of invalid traffic on CTV. More information and details about using these resources can be found in the following guide: Protecting your ad-supported CTV experiences.Collaborating on standards and solutionsNo single company or industry group can solve this challenge on their own, we need to work collaboratively to solve the problem. Fortunately, we're already seeing constructive efforts in this direction with industry-wide standards.For example, the broad implementation of the IAB Tech Lab's app-ads.txt and its web counterpart, ads.txt, have brought greater transparency to the digital advertising supply chain and have helped combat ad fraud by allowing advertisers to verify the sellers from whom they buy inventory. In 2021, the IAB Tech Lab extended the app-ads.txt standard to CTV in order to better protect and support CTV advertisers. This update is the first of several industry-wide steps that have been taken to further protect CTV advertising. In early 2022, the IAB Tech Lab released the ads.cert 2.0 “protocol suite,” along with a proposal to utilize this new standard to secure server-side connections (including for server-side ad insertion). Ads.cert 2.0 will also power future industry standards focused on securing the supply chain and preventing misrepresentation.In addition to these efforts, the Media Rating Council (MRC) also engaged with stakeholders to develop its Guideline ★★★
GoogleSec.webp 2023-02-23 11:59:32 Moving Connected Device Security Standards Forward (lien direct) Posted by Eugene Liderman, Director of Mobile Security Strategy, Google As Mobile World Congress approaches, we have the opportunity to have deep and meaningful conversations across the industry about the present and future of connected device security. Ahead of the event, we wanted to take a moment to recognize and share additional details on the notable progress being made to form harmonized connected device security standards and certification initiatives that provide users with better transparency about how their sensitive data is protected. Supporting the GSMA Working Party for Mobile Device Security TransparencyWe're pleased to support and participate in the recently announced GSMA working party, which will develop a first-of-its-kind smartphone security certification program. The program will leverage the Consumer Mobile Device Protection Profile (CMD PP) specification released by ETSI, a European Standards Development Organization (SDO), and will provide a consistent way to evaluate smartphones for critical capabilities like encryption, security updates, biometrics, networking, trusted hardware, and more. This initiative should help address a significant gap in the market for consumers and policy makers, who will greatly benefit from a new, central security resource. Most importantly, these certification programs will evaluate connected devices across industry-accepted criteria. Widely-used devices, including smartphones and tablets, which currently do not have a familiar security benchmark or system in place, will be listed with key information on device protection capabilities to bring more transparency to users. We hope this industry-run certification program can also benefit users and support policy makers in their work as they address baseline requirements and harmonization of standards.As policy makers consider changes through regulation and legislation, such as the UK's Product Security and Telecommunications Infrastructure Act (PSTI), and emerging regulation like the EU Cyber Security and Cyber Resilience Acts, we share the concerns that today we are not equipped with globally recognized standards that are critical to increased security across the ecosystem. We join governments in the call to come together to ensure that we can build workable, harmonized standards to protect the security of users and mobile infrastructure today and build the resilience needed to protect our future. The Importance of Harmonized Standards for Connected DevicesConnected devices, not just smartphones, are increasingly becoming the primary touchpoint for the most important aspects of our personal lives. From controlling the temperature of your home, to tracking your latest workout – connected devices have become embedded in our day-to-day tasks and activities. As consumers increasingly entrust more of their lives to their connected devices, they're right to question the security protections provided and demand more transparency from manufacturers. After we participated in a recent White House Workshop on IoT security labeling, we shared more about our commitment to security and transparency by announcing the extension of device security assessments – which started with Pixel 3 and now includes Nest, and Fitbit hardware. 
We have and always will strive to ensure our newly released products comply with the most prevalent security baselines that are defined by industry-recognized standards organizations. We will also ★★★★
GoogleSec.webp 2023-02-22 12:01:42 Vulnerability Reward Program: 2022 Year in Review (lien direct) Posted by Sarah Jacobus, Vulnerability Rewards Team It has been another incredible year for the Vulnerability Reward Programs (VRPs) at Google! Working with security researchers throughout 2022, we have been able to identify and fix over 2,900 security issues and continue to make our products more secure for our users around the world. We are thrilled to see significant year over year growth for our VRPs, and have had yet another record breaking year for our programs! In 2022 we awarded over $12 million in bounty rewards – with researchers donating over $230,000 to a charity of their choice. As in past years, we are sharing our 2022 Year in Review statistics across all of our programs. We would like to give a special thank you to all of our dedicated researchers for their continued work with our programs - we look forward to more collaboration in the future! AndroidThe Android VRP had an incredible record breaking year in 2022 with $4.8 million in rewards and the highest paid report in Google VRP history of $605,000! In our continued effort to ensure the security of Google device users, we have expanded the scope of Android and Google Devices in our program and are now incentivizing vulnerability research in the latest versions of Google Nest and Fitbit! For more information on the latest program version and qualifying vulnerability reports, please visit our public rules page. We are also excited to share that the invite-only Android Chipset Security Reward Program (ACSRP) - a private vulnerability reward program offered by Google in collaboration with manufacturers of Android chipsets - rewarded $486,000 in 2022 and received over 700 valid security reports. We would like to give a special shoutout to some of our top researchers, whose continued hard work helps to keep Android safe and secure: Submitting an impressive 200+ vulnerabilities to the Android VRP this year, Aman Pandey of Bugsmirror remains one of our program's top researchers. Since submitting their first report in 2019, Aman has reported more than 500 vulnerabilities to the program. Their hard work helps ensure the safety of our users; a huge thank you for all of their hard work! Zinuo Han of OPPO Amber Security Lab Vulnerability Guideline ★★
GoogleSec.webp 2023-02-21 12:29:09 Hardening Firmware Across the Android Ecosystem (lien direct) Posted by Roger Piqueras Jover, Ivan Lozano, Sudhi Herle, and Stephan Somogyi, Android Team A modern Android powered smartphone is a complex hardware device: Android OS runs on a multi-core CPU - also called an Application Processor (AP). And the AP is one of many such processors of a System On Chip (SoC). Other processors on the SoC perform various specialized tasks - such as security functions, image & video processing, and most importantly cellular communications. The processor performing cellular communications is often referred to as the baseband. For the purposes of this blog, we refer to the software that runs on all these other processors as “Firmware”. Securing the Android Platform requires going beyond the confines of the Application Processor (AP). Android's defense-in-depth strategy also applies to the firmware running on bare-metal environments in these microcontrollers, as they are a critical part of the attack surface of a device. A popular attack vector within the security research community As the security of the Android Platform has been steadily improved, some security researchers have shifted their focus towards other parts of the software stack, including firmware. Over the last decade there have been numerous publications, talks, Pwn2Own contest winners, and CVEs targeting exploitation of vulnerabilities in firmware running in these secondary processors. Bugs remotely exploitable over the air (eg. WiFi and cellular baseband bugs) are of particular concern and, therefore, are popular within the security research community. These types of bugs even have their own categorization in well known 3rd party exploit marketplaces. Regardless of whether it is remote code execution within the WiFi SoC or within the cellular baseband, a common and resonating theme has been the consistent lack of exploit mitigations in firmware. Conveniently, Android has significant experience in enabling exploit mitigations across critical attack surfaces. Applying years worth of lessons learned in systems hardening Over the last few years, we have successfully enabled compiler-based mitigations in Android - on the AP - which add additional layers of defense across the platform, making it harder to build reproducible exploits and to prevent certain types of bugs from becoming vulnerabilities. Building on top of these successes and lessons learned, we're applying the same principles to hardening the security of firmware that runs outside of Android per se, directly on the bare-metal hardware. In particular, we are working with our ecosystem partners in several areas aimed at hardening the security of firmware that interacts with Android: Exploring and enabling compiler-based sanitizers (Bound Vulnerability ★★★★
GoogleSec.webp 2023-02-13 12:01:11 The US Government says companies should take more responsibility for cyberattacks. We agree. (lien direct) Posted by Kent Walker, President, Global Affairs & Chief Legal Officer, Google & Alphabet and Royal Hansen, Vice President of Engineering for Privacy, Safety, and Security Should companies be responsible for cyberattacks? The U.S. government thinks so – and frankly, we agree. Jen Easterly and Eric Goldstein of the Cybersecurity and Infrastructure Security Agency at the Department of Homeland Security planted a flag in the sand: “The incentives for developing and selling technology have eclipsed customer safety in importance. […] Americans…have unwittingly come to accept that it is normal for new software and devices to be indefensible by design. They accept products that are released to market with dozens, hundreds, or even thousands of defects. They accept that the cybersecurity burden falls disproportionately on consumers and small organizations, which are often least aware of the threat and least capable of protecting themselves.”We think they're right. It's time for companies to step up on their own and work with governments to help fix a flawed ecosystem. Just look at the growing threat of ransomware, where bad actors lock up organizations' systems and demand payment or ransom to restore access. Ransomware affects every industry, in every corner of the globe – and it thrives on pre-existing vulnerabilities: insecure software, indefensible architectures, and inadequate security investment. Remember that sophisticated ransomware operators have bosses and budgets too. They increase their return on investment by exploiting outdated and insecure technology systems that are too hard to defend. Alarmingly, the most significant source of compromise is through exploitation of known vulnerabilities, holes sometimes left unpatched for years. While law enforcement works to bring ransomware operators to justice, this merely treats the symptoms of the problem. Treating the root causes will require addressing the underlying sources of digital vulnerabilities. As Easterly and Goldstein rightly point out, “secure by default” and “secure by design” should be table stakes. The bottom line: People deserve products that are secure by default and systems that are built to withstand the growing onslaught from attackers. Safety should be fundamental: built-in, enabled out of the box, and not added on as an afterthought. In other words, we need secure products, not security products. That's why Google has worked to build security in – often making it invisible – to our users. Many of our most significant security features, including innovations like SafeBrowsing, do their best work behind the scenes for our core consumer products. There's come to be an unfortunate belief that security features are cumbersome and hurt user experience. That can be true – but it doesn't need to be. We can make the safe path the easiest, most helpful path for people using our products. Our approach to multi-factor authentication – one of the most important controls to defend against phishing attacks – provides a great example. Since 2021, we've turned on 2-Step Verification (2SV) by default for hundreds of millions of people to add an additional layer of security across their online accounts. If we had simply announced 2SV as an available option for people to enroll in, it would have failed like so many other security add-ons. 
Instead, we pioneered an approach using in-app notifications that was so seamless and integrated, many of the millions of people we auto-enrolled never noticed they adopted 2SV. We've taken this approach even further by build Ransomware Threat ★★★
GoogleSec.webp 2023-02-01 13:00:49 Taking the next step: OSS-Fuzz in 2023 (lien direct) Posted by Oliver Chang, OSS-Fuzz team Since launching in 2016, Google's free OSS-Fuzz code testing service has helped get over 8800 vulnerabilities and 28,000 bugs fixed across 850 projects. Today, we're happy to announce an expansion of our OSS-Fuzz Rewards Program, plus new features in OSS-Fuzz and our involvement in supporting academic fuzzing research. Refreshed OSS-Fuzz rewards The OSS-Fuzz project's purpose is to support the open source community in adopting fuzz testing, or fuzzing - an automated code testing technique for uncovering bugs in software. In addition to the OSS-Fuzz service, which provides a free platform for continuous fuzzing to critical open source projects, we established an OSS-Fuzz Reward Program in 2017 as part of our wider Patch Rewards Program. We've operated this successfully for the past 5 years, and to date, the OSS-Fuzz Reward Program has awarded over $600,000 to over 65 different contributors for their help integrating new projects into OSS-Fuzz. Today, we're excited to announce that we've expanded the scope of the OSS-Fuzz Reward Program considerably, introducing many new types of rewards! These new reward types cover contributions such as: Project fuzzing coverage increases Notable FuzzBench fuzzer integrations Integrating a new sanitizer (example) that finds two new vulnerabilities These changes boost the total rewards possible per project integration from a maximum of $20,000 to $30,000 (depending on the criticality of the project). In addition, we've also established two new reward categories that reward wider improvements across all OSS-Fuzz projects, with up to $11,337 available per category. For more details, see the fully updated rules for our dedicated OSS-Fuzz Reward Program. OSS-Fuzz improvements We've continuously made improvements to OSS-Fuzz's infrastructure over the years and expanded our language offerings to cover C/C++, Go, Rust, Java, Python, and Swift, and have introduced support for new frameworks such as FuzzTest. Additionally, as part of an ongoing collaboration with Code Intelligence, we'll soon have support for JavaScript fuzzing through Jazzer.js. FuzzIntrospector support Last year, we launched the OpenSSF FuzzIntrospector tool and integrated it into OSS-Fuzz. We've continued to build on this by adding new language support and better analysis, and now C/C++, Python, and Java projects integrated into OSS-Fuzz have detailed insights on how the coverage and fuzzing effectiveness for a project can be improved. The Tool ★★★★★
GoogleSec.webp 2023-01-13 12:29:06 Sustaining Digital Certificate Security - TrustCor Certificate Distrust (lien direct) Posted by Chrome Root Program, Chrome Security Team Note: This post is a follow-up to discussions carried out on the Mozilla “Dev Security Policy” Web PKI public discussion forum Google Group in December 2022. Google Chrome communicated its distrust of TrustCor in the public forum on December 15, 2022.The Chrome Security Team prioritizes the security and privacy of Chrome's users, and we are unwilling to compromise on these values. Google includes or removes CA certificates within the Chrome Root Store as it deems appropriate for user safety in accordance with our policies. The selection and ongoing inclusion of CA certificates is done to enhance the security of Chrome and promote interoperability. Behavior that attempts to degrade or subvert security and privacy on the web is incompatible with organizations whose CA certificates are included in the Chrome Root Store. Due to a loss of confidence in its ability to uphold these fundamental principles and to protect and safeguard Chrome's users, certificates issued by TrustCor Systems will no longer be recognized as trusted by: Chrome versions 111 (landing in Beta approximately February 9, 2023 and Stable approximately March 7, 2023) and greater; and Older versions of Chrome capable of receiving Component Updates after Chrome 111's Stable release date. This change was first communicated in the Mozilla “Dev Security Policy” Web PKI public discussion forum Google Group on December 15, 2022. This change will be implemented via our existing mechanisms to respond to CA incidents via: An integrated certificate blocklist, and Removal of certificates included in the Chrome Root Store. Beginning approximately March 7, 2023, navigations to websites that use a certificate that chains to one of the roots detailed below will be considered insecure and result in a full page certificate error interstitial. Affected Certificates (SHA-256 fingerprint): d40e9c86cd8fe468c1776959f49ea774fa548684b6c406f3909261f4dce2575c 0753e940378c1bd5e3836e395daea5cb839e5046f1bd0eae1951cf10fec7c965 5a885db19c01d912c5759388938cafbbdf031ab2d48e91ee15589b42971d039c This change will be integrated into the Chromium open-source project as part of a default build. Questions about the expected behavior in specific Chromium-based browsers should be directed to their maintainers. This change will be incorporated as part of the regular Chrome release process to ensure sufficient time for testing and replacing affected certificates by website operators. Information about release timetables and milestones is available at https://chromiumdash.appspot.com/schedule. Beginning approximately February 9, 2023, website operators can preview these changes in Chrome 111 Beta. Website operators will also be able to preview the change sooner, using our Dev and Canary channels. The majority of users will not encounter behavior changes until the release of Chrome 111 to the Stable channel, approximately March 7, 2023. Summarizing security response of other Google products: Android has removed TrustCor's root CA certificates from th ★★★
GoogleSec.webp 2023-01-12 12:26:06 Supporting the Use of Rust in the Chromium Project (lien direct) Posted by Dana Jansens (she/her), Chrome Security Team We are pleased to announce that moving forward, the Chromium project is going to support the use of third-party Rust libraries from C++ in Chromium. To do so, we are now actively pursuing adding a production Rust toolchain to our build system. This will enable us to include Rust code in the Chrome binary within the next year. We're starting slow and setting clear expectations on what libraries we will consider once we're ready. In this blog post, we will discuss how we arrived at the decision to support third-party Rust libraries at this time, and not broader usage of Rust in Chromium. Why We Chose to Bring Rust into ChromiumOur goal in bringing Rust into Chromium is to provide a simpler (no IPC) and safer (less complex C++ overall, no memory safety bugs in a sandbox either) way to satisfy the rule of two, in order to speed up development (less code to write, less design docs, less security review) and improve the security (increasing the number of lines of code without memory safety bugs, decreasing the bug density of code) of Chrome. And we believe that we can use third-party Rust libraries to work toward this goal. Rust was developed by Mozilla specifically for use in writing a browser, so it's very fitting that Chromium would finally begin to rely on this technology too. Thank you Mozilla for your huge contribution to the systems software industry. Rust has been an incredible proof that we should be able to expect a language to provide safety while also being performant. We know that C++ and Rust can play together nicely, through tools like cxx, autocxx bindgen, cbindgen, diplomat, and (experimental) crubit. However there are also limitations. We can expect that the shape of these limitations will change in time through new or improved tools, but the decisions and descriptions here are based on the current state of technology. How Chromium Will Support the Use of RustThe Chrome Security team has been investing time into researching how we should approach using Rust alongside our C++ code. Understanding the implications of incrementally moving to writing Rust instead of C++, even in the middle of our software stack. What the limits of safe, simple, and reliable interop might be. Based on our research, we landed on two outcomes for Chromium. We will support interop in only a single direction, from C++ to Rust, for now. Chromium is written in C++, and the majority of stack frames are in C++ code, right from main() until exit(), which is why we chose this direction. By limiting interop to a single direction, we control the shape of the dependency tree. Rust can not depend on C++ so it cannot know about C++ types and functions, except through dependency injection. In this way, Rust can not land in arbitrary C++ code, only in functions passed through the API from C++. We will only support third-party libraries for now. Third-party libraries are written as standalone components, they don't hold implicit knowledge about the implementation of Chromium. This means they have APIs that are simpler and focused on their single task. Or, put another way, they typically have a narrow interface, without ★★★
GoogleSec.webp 2022-12-15 20:51:24 Expanding the App Defense Alliance (lien direct) Posted by Brooke Davis, Android Security and Privacy Team The App Defense Alliance launched in 2019 with a mission to protect Android users from bad apps through shared intelligence and coordinated detection between alliance partners. Earlier this year, the App Defense Alliance expanded to include new initiatives outside of malware detection and is now the home for several industry-led collaborations including Malware Mitigation, MASA (Mobile App Security Assessment) & CASA (Cloud App Security Assessment). With a new dedicated landing page at appdefensealliance.dev, the ADA has an expanded mission to protect Android users by removing threats while improving app quality across the ecosystem. Let's walk through some of the latest program updates from the past year, including the addition of new ADA members. Malware MitigationTogether, with the founding ADA members - Google, ESET, Lookout, and Zimperium, the alliance has been able to reduce the risk of app-based malware and better protect Android users. These partners have access to mobile apps as they are being submitted to the Google Play Store and scan thousands of apps daily, acting as another, vital set of eyes prior to an app going live on Play. Knowledge sharing and industry collaboration are important aspects in securing the world from attacks and that's why we're continuing to invest in the program. New ADA MembersWe're excited to see the ADA expand with the additions of McAfee and Trend Micro. Both McAfee and Trend Micro are leaders in the antivirus space and we look forward to their contributions to the program. Mobile App Security Assessment (MASA)With consumers spending four to five hours per day in mobile apps, ensuring the safety of these services is more important than ever. According to Data.ai, the pandemic accelerated existing mobile habits - with app categories like finance growing 25% YoY and users spending over 100 billion hours in shopping apps. That's why the ADA introduced MASA (Mobile App Security Assessment), which allows developers to have their apps independently validated against the Mobile Application Security Verification Standard (MASVS standard) under the OWASP Mobile Application Security project. The project's mission is to “Define the industry standard for mobile application security,” and has been used by both public and private sector organizations as a form of industry best practices when it comes to mobile application security. Developers can work directly with an ADA Authorized Lab to have their apps evaluated against a set of MASVS L1 requirements. Once successful, the app's validation is listed in the recently launched App Validation Directory, which provides users a single place to view all app validations. The Directory also allows users to access more assessment details including validation date, test lab, and a report showing all test steps and requirements. The Directory will be updated over time with new features and search functionality to make it more user friendly. The Google Play Store is the first commercial app store to recognize and display a badge for any app that has completed an independent security review through ADA MASA. The badge is displayed within an app's respective Malware Guideline Prediction Uber ★★
GoogleSec.webp 2022-12-13 13:00:47 Announcing OSV-Scanner: Vulnerability Scanner for Open Source (lien direct) Posted by Rex Pan, software engineer, Google Open Source Security TeamToday, we're launching the OSV-Scanner, a free tool that gives open source developers easy access to vulnerability information relevant to their project. Last year, we undertook an effort to improve vulnerability triage for developers and consumers of open source software. This involved publishing the Open Source Vulnerability (OSV) schema and launching the OSV.dev service, the first distributed open source vulnerability database. OSV allows all the different open source ecosystems and vulnerability databases to publish and consume information in one simple, precise, and machine readable format. The OSV-Scanner is the next step in this effort, providing an officially supported frontend to the OSV database that connects a project's list of dependencies with the vulnerabilities that affect them. OSV-Scanner Software projects are commonly built on top of a mountain of dependencies-external software libraries you incorporate into a project to add functionalities without developing them from scratch. Each dependency potentially contains existing known vulnerabilities or new vulnerabilities that could be discovered at any time. There are simply too many dependencies and versions to keep track of manually, so automation is required. Scanners provide this automated capability by matching your code and dependencies against lists of known vulnerabilities and notifying you if patches or updates are needed. Scanners bring incredible benefits to project security, which is why the 2021 U.S. Executive Order for Cybersecurity included this type of automation as a requirement for national standards on secure software development.The OSV-Scanner generates reliable, high-quality vulnerability information that closes the gap between a developer's list of packages and the information in vulnerability databases. Since the OSV.dev database is open source and distributed, it has several benefits in comparison with closed source advisory databases and scanners: Each advisory comes from an open and authoritative source (e.g. the RustSec Advisory Database) Anyone can suggest improvements to advisories, resulting in a very high quality database The OSV format unambiguously stores information about affected versions in a machine-readable format that precisely maps onto a developer's list of packages The above all results in fewer, more actionable vulnerability notifications, which reduces the time needed to resolve them Running OSV-Scanner on your project will first find all the transitive dependencies that are being used by analyzing manifests, SBOMs, and commit hashes. The scanner then connects this information with the OSV database and displays the vulnerabilities relevant to your project. Tool Vulnerability ★★★
GoogleSec.webp 2022-12-08 11:59:15 Trust in transparency: Private Compute Core (lien direct) Posted by Dave Kleidermacher, Dianne Hackborn, and Eugenio Marchiori We care deeply about privacy. We also know that trust is built by transparency. This blog, and the technical paper reference within, is an example of that commitment: we describe an important new Android privacy infrastructure called Private Compute Core (PCC). Some of our most exciting machine learning features use continuous sensing data - information from the microphone, camera, and screen. These features keep you safe, help you communicate, and facilitate stronger connections with people you care about. To unlock this new generation of innovative concepts, we built a specialized sandbox to privately process and protect this data. Android Private Compute Core PCC is a secure, isolated data processing environment inside of the Android operating system that gives you control of the data inside, such as deciding if, how, and when it is shared with others. This way, PCC can enable features like Live Translate without sharing continuous sensing data with service providers, including Google. PCC is part of Protected Computing, a toolkit of technologies that transform how, when, and where data is processed to technically ensure its privacy and safety. For example, by employing cloud enclaves, edge processing, or end-to-end encryption we ensure sensitive data remains in exclusive control of the user. How Private Compute Core works PCC is designed to enable innovative features while keeping the data needed for them confidential from other subsystems. We do this by using techniques such as limiting Interprocess Communications (IPC) binds and using isolated processes. These are included as part of the Android Open Source Project and controlled by publicly available surfaces, such as Android framework APIs. For features that run inside PCC, continuous sensing data is processed safely and seamlessly while keeping it confidential. To stay useful, any machine learning feature has to get better over time. To keep the models that power PCC features up to date, while still keeping the data private, we leverage federated learning and analytics. Network calls to improve the performance of these models can be monitored using Private Compute Services. Let us show you our work The publicly-verifiable architectures in PCC demonstrate how we strive to deliver confidentiality and control, and do it in a way that is verifiable and visible to users. In addition to this blog, we provide this transparency through public documentation and open-source code - we hope you'll have a look below. To explain in even more detail, we've published a technical whitepaper for researchers and interested members of the community. In it, we describe data protections in-depth, the processes and mechanisms we've built, and includ ★★★
GoogleSec.webp 2022-12-05 13:03:18 Enhanced Protection - The strongest level of Safe Browsing protection Google Chrome has to offer (lien direct) Posted by Benjamin Ackerman (Chrome Security and Jonathan Li (Safe Browsing) As a follow-up to a previous blog post about How Hash-Based Safe Browsing Works in Google Chrome, we wanted to provide more details about Safe Browsing's Enhanced Protection mode in Chrome. Specifically, how it came about, the protections that are offered and what it means for your data. Security and privacy have always been top of mind for Chrome. Our goal is to make security effortless for you while browsing the web, so that you can go about your day without having to worry about the links that you click on or the files that you download. This is why Safe Browsing's phishing and malware protections have been a core part of Chrome since 2007. You may have seen these in action if you have ever come across one of our red warning pages. We show these warnings whenever we believe a site that you are trying to visit or file that you are trying to download might put you at risk for an attack. To give you a better understanding of how the Enhanced Protection mode in Safe Browsing provides the strongest level of defense it's useful to know what is offered in Standard Protection. Standard Protection Enabled by default in Chrome, Standard Protection was designed to be privacy preserving at its core by using hash-based checks. This has been effective at protecting users by warning millions of users about dangerous websites. However, hash-based checks are inherently limited as they rely on lookups to a list of known bad sites. We see malicious actors moving fast and constantly evolving their tactics to avoid detection using sophisticated techniques. To counter this, we created a stronger and more customized level of protection that we could offer to users. To this end, we launched Enhanced Protection in 2020, which builds upon the Standard Protection mode in Safe Browsing to keep you safer. Enhanced Protection This is the fastest and strongest level of protection against dangerous sites and downloads that Safe Browsing offers in Chrome. It enables more advanced detection techniques that adapt quickly as malicious activity evolves. As a result, Enhanced Protection users are phished 20-35% less than users on Standard Protection. A few of these features include: Real time URL checks: By checking with Google Safe Browsing's servers in real time before navigating to an uncommon site you're visiting, Chrome provides the best protection against dangerous sites and uses advanced machine learning models to continuously stay up to date. File checks before downloading: In addition to Chrome's standard checks of downloaded files, Enhanced Protection users can choose to upload suspicious files to be scanned by Google Safe Browsing's full suite of malware detection technology before opening the file. This helps catch brand new malware that Safe Browsing has not scanned bef Malware ★★★
GoogleSec.webp 2022-12-01 11:58:33 Memory Safe Languages in Android 13 (lien direct) Posted by Jeffrey Vander Stoep For more than a decade, memory safety vulnerabilities have consistently represented more than 65% of vulnerabilities across products, and across the industry. On Android, we're now seeing something different - a significant drop in memory safety vulnerabilities and an associated drop in the severity of our vulnerabilities. Looking at vulnerabilities reported in the Android security bulletin, which includes critical/high severity vulnerabilities reported through our vulnerability rewards program (VRP) and vulnerabilities reported internally, we see that the number of memory safety vulnerabilities have dropped considerably over the past few years/releases. From 2019 to 2022 the annual number of memory safety vulnerabilities dropped from 223 down to 85. This drop coincides with a shift in programming language usage away from memory unsafe languages. Android 13 is the first Android release where a majority of new code added to the release is in a memory safe language. As the amount of new memory-unsafe code entering Android has decreased, so too has the number of memory safety vulnerabilities. From 2019 to 2022 it has dropped from 76% down to 35% of Android's total vulnerabilities. 2022 is the first year where memory safety vulnerabilities do not represent a majority of Android's vulnerabilities. Vulnerability ★★★★
GoogleSec.webp 2022-11-02 14:12:24 Our Principles for IoT Security Labeling (lien direct) Posted by Dave Kleidermacher, Eugene Liderman, and Android and Made by Google security teams We believe that security and transparency are paramount pillars for electronic products connected to the Internet. Over the past year, we've been excited to see more focused activity across policymakers, industry partners, developers, and public interest advocates around raising the security and transparency bar for IoT products. That said, the details of IoT product labeling - the definition of labeling, what labeling needs to convey in terms of security and privacy, where the label should reside, and how to achieve consumer acceptance, are still open for debate. Google has also been considering these core questions for a long time. As an operating system, IoT product provider, and the maintainer of multiple large ecosystems, we see firsthand how critical these details will be to the future of the IoT. In an effort to be a catalyst for collaboration and transparency, today we're sharing our proposed list of principles around IoT security labeling. Setting the Stage: Defining IoT Labeling IoT labeling is a complex and nuanced topic, so as an industry, we should first align on a set of labeling definitions that could help reduce potential fragmentation and offer a harmonized approach that could drive a desired outcome: Label: printed and/or digital representation of a digital product's security and/or privacy status intended to inform consumers and/or other stakeholders. A label may include both printed and digital representations; for example, a printed label may include a logo and QR code that references a digital representation of the security claims being made. Labeling scheme: a program that defines, manages, and monitors the use of labels, including but not limited to user experience, adherence to specific standards or security profiles, and lifecycle management of the label (e.g. decommissioning) Evaluation scheme: a program that publishes, manages, and monitors the security claims of digital products against security requirements and related standards; labeling schemes may rely on evaluation schemes to produce the information referred to in or by their labels. Proposed Principles for IoT Security Labeling SchemesWe believe in five core principles for IoT labeling schemes. These principles will help increase transparency against the full baseline of security criteria for IoT. These principles will also increase competition in security and push manufacturers to offer products with effective security protections, increase transparency, and help generate higher levels of assurance of protection over time. 1. A printed label must not imply trustUnlike food labels, digital security labels must be “live” labels, where security/privacy status is conveyed on a central maintained website, which ideally would be the same site hosting the evaluation scheme. A physical label, either printed on a box or visible in an app, can be used if and only if it encourages users to visit the website (e.g. scan a QR code or click a link) to obtain the real-time status. At any point in time, a digital product may become unsafe for use. For example, if a critical, in-the-wild, remote exploit of a product is discovered and cannot be mitigated (e.g. via a patch), then it may be necessary to change that product's status from safe to unsafe. 
Printed labels, if they convey trust implicitly such as, “certified to NNN standard” or, “3 stars”, run the danger of influencing consumers to make harmful decisions. A consumer may purchase a webcam with a “3-star” security label only to find when they return home the product has non-mitigatable vulnerabiliti Vulnerability Threat Guideline
GoogleSec.webp 2022-10-20 13:01:02 Announcing GUAC, a great pairing with SLSA (and SBOM)! (lien direct) Posted by Brandon Lum, Mihai Maruseac, Isaac Hepworth, Google Open Source Security Team Supply chain security is at the fore of the industry's collective consciousness. We've recently seen a significant rise in software supply chain attacks, a Log4j vulnerability of catastrophic severity and breadth, and even an Executive Order on Cybersecurity. It is against this background that Google is seeking contributors to a new open source project called GUAC (pronounced like the dip). GUAC, or Graph for Understanding Artifact Composition, is in the early stages yet is poised to change how the industry understands software supply chains. GUAC addresses a need created by the burgeoning efforts across the ecosystem to generate software build, security, and dependency metadata. True to Google's mission to organize and make the world's information universally accessible and useful, GUAC is meant to democratize the availability of this security information by making it freely accessible and useful for every organization, not just those with enterprise-scale security and IT funding. Thanks to community collaboration in groups such as OpenSSF, SLSA, SPDX, CycloneDX, and others, organizations increasingly have ready access to: Software Bills of Materials (SBOMs) (with SPDX-SBOM-Generator, Syft, kubernetes bom tool) signed attestations about how software was built (e.g. SLSA with SLSA3 Github Actions Builder, Google Cloud Build) vulnerability databases that aggregate information across ecosystems and make vulnerabilities more discoverable and actionable (e.g. OSV.dev, Global Security Database (GSD)). These data are useful on their own, but it's difficult to combine and synthesize the information for a more comprehensive view. The documents are scattered across different databases and producers, are attached to different ecosystem entities, and cannot be easily aggregated to answer higher-level questions about an organization's software assets. To help address this issue we've teamed up with Kusari, Purdue University, and Citi to create GUAC, a free tool to bring together many different sources of software security metadata. We're excited to share the project's proof of concept, which lets you query a small dataset of software metadata including SLSA provenance, SBOMs, and OpenSSF Scorecards. What is GUAC Graph for Understanding Artifact Composition (GUAC) aggregates software security metadata into a high fidelity graph database-normalizing entity identities and mapping standard relationships between them. Querying this graph can drive higher-level organizational outcomes such as audit, policy, risk management, and even developer assistance. Conceptually, GUAC occupies the “aggregation and synthesis” layer of the software supply chain transparency logical model: Tool Vulnerability Uber
GoogleSec.webp 2022-10-12 08:00:03 Security of Passkeys in the Google Password Manager (lien direct) Posted by Arnar Birgisson, Software Engineer. We are excited to announce passkey support on Android and Chrome for developers to test today, with general availability following later this year. In this post we cover details on how passkeys stored in the Google Password Manager are kept secure. See our post on the Android Developers Blog for a more general overview. Passkeys are a safer and more secure alternative to passwords. They also replace the need for traditional 2nd factor authentication methods such as text message, app-based one-time codes or push-based approvals. Passkeys use public-key cryptography so that data breaches of service providers don't result in a compromise of passkey-protected accounts, and are based on industry standard APIs and protocols to ensure they are not subject to phishing attacks. Passkeys are the result of an industry-wide effort. They combine secure authentication standards created within the FIDO Alliance and the W3C Web Authentication working group with a common terminology and user experience across different platforms, recoverability against device loss, and a common integration path for developers. Passkeys are supported in Android and other leading industry client OS platforms. A single passkey identifies a particular user account on some online service. A user has different passkeys for different services. The user's operating systems, or software similar to today's password managers, provide user-friendly management of passkeys. From the user's point of view, using passkeys is very similar to using saved passwords, but with significantly better security. The main ingredient of a passkey is a cryptographic private key. In most cases, this private key lives only on the user's own devices, such as laptops or mobile phones. When a passkey is created, only its corresponding public key is stored by the online service. During login, the service uses the public key to verify a signature from the private key. This can only come from one of the user's devices. Additionally, the user is also required to unlock their device or credential store for this to happen, preventing sign-ins from e.g. a stolen phone. To address the common case of device loss or upgrade, a key feature enabled by passkeys is that the same private key can exist on multiple devices. This happens through platform-provided synchronization and backup. Guideline
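The core property described above - the service stores only a public key and verifies a per-login signature that only the user's unlocked device can produce - can be sketched in a few lines of Go. This is a simplified illustration of the principle, not the WebAuthn/FIDO2 protocol or the Google Password Manager implementation.

```go
// Minimal sketch (not the real WebAuthn/FIDO2 protocol) of the property
// passkeys rely on: the relying party stores only a public key, and at login
// it verifies a signature that can only come from the user's device, which
// holds the private key and requires a local unlock before signing.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// "Registration": the device creates a key pair and sends only the
	// public key to the online service.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	servicePub := &priv.PublicKey

	// "Login": the service issues a fresh random challenge...
	challenge := make([]byte, 32)
	if _, err := rand.Read(challenge); err != nil {
		panic(err)
	}

	// ...the device signs it after the user unlocks the device...
	digest := sha256.Sum256(challenge)
	sig, err := ecdsa.SignASN1(rand.Reader, priv, digest[:])
	if err != nil {
		panic(err)
	}

	// ...and the service verifies with the stored public key. A breach of the
	// service leaks no secret that lets an attacker sign future challenges.
	fmt.Println("signature valid:", ecdsa.VerifyASN1(servicePub, digest[:], sig))
}
```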
GoogleSec.webp 2022-10-11 19:22:42 Google Pixel 7 and Pixel 7 Pro: The next evolution in mobile security (lien direct) Dave Kleidermacher, Jesse Seed, Brandon Barbello, Sherif Hanna, Eugene Liderman, Android, Pixel, and Silicon Security Teams. Every day, billions of people around the world trust Google products to enrich their lives and provide helpful features – across mobile devices, smart home devices, health and fitness devices, and more. We keep more people safe online than anyone else in the world, with products that are secure by default, private by design and that put you in control. As our advancements in knowledge and computing grow to deliver more help across contexts, locations and languages, our unwavering commitment to protecting your information remains. That's why Pixel phones are designed from the ground up to help protect you and your sensitive data while keeping you in control. We're taking our industry-leading approach to security and privacy to the next level with Google Pixel 7 and Pixel 7 Pro, our most secure and private phones yet, which were recently recognized as the highest rated for security when tested among other smartphones by a third-party global research firm.1 Pixel phones also get better every few months with Feature Drops that provide the latest product updates, tips and tricks from Google. And Pixel 7 and Pixel 7 Pro users will receive at least five years of security updates2, so your Pixel gets even more secure over time. Your protection, built into Pixel. Your digital life and most sensitive information lives on your phone: financial information, passwords, personal data, photos – you name it. With Google Tensor G2 and our custom Titan M2 security chip, Pixel 7 and Pixel 7 Pro have multiple layers of hardware security to help keep you and your personal information safe. We take a comprehensive, end-to-end approach to security with verifiable protections at each layer - the network, application, operating system and multiple layers on the silicon itself. If you use Pixel for your business, this approach helps protect your company data, too. Google Tensor G2 is Pixel's newest powerful processor custom built with Google AI, and makes Pixel 7 faster, more efficient and secure3. Every aspect of Tensor G2 was designed to improve Pixel's performance and efficiency for great battery life, amazing photos and videos. Tensor's built-in security core works with our Titan M2 security chip to keep your personal information, PINs and passwords safe. Titan family chips are also used to protect Google Cloud data centers and Chromebooks, so the same hardware that protects Google servers also secures your sensitive information stored on Pixel. And, in a first for Google, Titan M2 hardware has now been certified under Common Criteria PP0084: the international gold standard for hardware security components also used for identity, SIM cards, and bankcard security chips. Spam Malware Vulnerability Guideline Industrial APT 40
GoogleSec.webp 2022-09-13 12:59:14 Use-after-freedom: MiraclePtr (lien direct) Posted by Adrian Taylor, Bartek Nowierski and Kentaro Hara on behalf of the MiraclePtr team. Memory safety bugs are the most numerous category of Chrome security issues and we're continuing to investigate many solutions – both in C++ and in new programming languages. The most common type of memory safety bug is the “use-after-free”. We recently posted about an exciting series of technologies designed to prevent these. Those technologies (collectively, *Scan, pronounced “star scan”) are very powerful but likely require hardware support for sufficient performance. Today we're going to talk about a different approach to solving the same type of bugs. It's hard, if not impossible, to avoid use-after-frees in a non-trivial codebase. It's rarely a mistake by a single programmer. Instead, one programmer makes reasonable assumptions about how a bit of code will work, then a later change invalidates those assumptions. Suddenly, the data isn't valid as long as the original programmer expected, and an exploitable bug results. These bugs have real consequences. For example, according to Google Threat Analysis Group, a use-after-free in Chrome's HTML engine was exploited this year by North Korea. Half of the known exploitable bugs in Chrome are use-after-frees. Diving Deeper: Not All Use-After-Free Bugs Are Equal. Chrome has a multi-process architecture, partly to ensure that web content is isolated into a sandboxed “renderer” process where little harm can occur. An attacker therefore usually needs to find and exploit two vulnerabilities - one to achieve code execution in the renderer process, and another bug to break out of the sandbox. The first stage is often the easier one. The attacker has lots of influence in the renderer process. It's easy to arrange memory in a specific way, and the renderer process acts upon many different kinds of web content, giving a large “attack surface” that could potentially be exploited. The second stage, escaping the renderer sandbox, is trickier. Attackers have two options for how to do this: They can exploit a bug in the underlying operating system (OS) through the limited interfaces available inside Chrome's sandbox. Or, they can exploit a bug in a more powerful, privileged part of Chrome - like the “browser” process. This process coordinates all the other bits of Chrome, so fundamentally has to be all-powerful. We imagine the attackers squeezing through the narrow part of a funnel. Vulnerability Threat Guideline
GoogleSec.webp 2022-09-08 12:00:15 Fuzzing beyond memory corruption: Finding broader classes of vulnerabilities automatically (lien direct) Posted by Jonathan Metzman, Dongge Liu and Oliver Chang, Google Open Source Security Team. Recently, OSS-Fuzz - our community fuzzing service that regularly checks 700 critical open source projects for bugs - detected a serious vulnerability (CVE-2022-3008): a bug in the TinyGLTF project that could have allowed attackers to execute malicious code in projects using TinyGLTF as a dependency. The bug was soon patched, but the wider significance remains: OSS-Fuzz caught a trivially exploitable command injection vulnerability. This discovery shows that fuzzing, a type of testing once primarily known for detecting memory corruption vulnerabilities in C/C++ code, has considerable untapped potential to find broader classes of vulnerabilities. Though the TinyGLTF library is written in C++, this vulnerability is easily applicable to all programming languages and confirms that fuzzing is a beneficial and necessary testing method for all software projects. Fuzzing as a public service. OSS-Fuzz was launched in 2016 in response to the Heartbleed vulnerability, discovered in one of the most popular open source projects for encrypting web traffic. The vulnerability had the potential to affect almost every internet user, yet was caused by a relatively simple memory buffer overflow bug that could have been detected by fuzzing - that is, by running the code on randomized inputs to intentionally cause unexpected behaviors or crashes that signal bugs. At the time, though, fuzzing was not widely used and was cumbersome for developers, requiring extensive manual effort. Google created OSS-Fuzz to fill this gap: it's a free service that runs fuzzers for open source projects and privately alerts developers to the bugs detected. Since its launch, OSS-Fuzz has become a critical service for the open source community, helping get more than 8,000 security vulnerabilities and more than 26,000 other bugs in open source projects fixed. With time, OSS-Fuzz has grown beyond C/C++ to detect problems in memory-safe languages such as Go, Rust, and Python. Google Cloud's Assured Open Source Software Service, which provides organizations a secure and curated set of open source dependencies, relies on OSS-Fuzz as a foundational layer of security scanning. OSS-Fuzz is also the basis for free fuzzing tools for the community, such as ClusterFuzzLite, which gives developers a streamlined way to fuzz both open source and proprietary code before committing changes to their projects. All of these efforts are part of Google's $10B commitment to improving cybersecurity and continued work to make open source software more secure for everyone. New classes of vulnerabilities. Last December, OSS-Fuzz Vulnerability
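To make the “beyond memory corruption” point concrete, here is a small fuzz target written with Go's native fuzzing support (a different harness than the C++/libFuzzer setup OSS-Fuzz used for TinyGLTF). BuildThumbnailCommand is a hypothetical, deliberately buggy function invented for this sketch; the target encodes an injection-freedom property that a fuzzer can violate quickly, no memory corruption required.

```go
// Illustrative fuzz target (save as thumbnail_test.go). BuildThumbnailCommand
// is a hypothetical, deliberately buggy function: it splices untrusted input
// into a shell command string instead of passing it as a discrete argument.
package thumbnail

import (
	"strings"
	"testing"
)

// BuildThumbnailCommand builds a shell command line for resizing an image.
func BuildThumbnailCommand(filename string) string {
	return "convert " + filename + " -resize 128x128 out.png"
}

func FuzzBuildThumbnailCommand(f *testing.F) {
	f.Add("cat.png") // seed corpus
	f.Fuzz(func(t *testing.T, filename string) {
		cmd := BuildThumbnailCommand(filename)
		// Property: input containing shell metacharacters must never reach
		// the command line unsanitized. The fuzzer quickly finds violating
		// inputs such as "x.png; rm -rf /".
		if strings.ContainsAny(filename, ";|&`$\n") && strings.Contains(cmd, filename) {
			t.Errorf("unsanitized input reached the command line: %q", cmd)
		}
	})
}
```

Run with `go test -fuzz=FuzzBuildThumbnailCommand`; the same property-based style applies whether the harness is Go's fuzzer, libFuzzer, or OSS-Fuzz.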