What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
GoogleSec.webp 2024-05-15 12:59:21 I/O 2024: What's new in Android security and privacy
(direct link)
Posted by Dave Kleidermacher, VP Engineering, Android Security and Privacy Our commitment to user safety is a top priority for Android. We've been consistently working to stay ahead of the world's scammers, fraudsters and bad actors. And as their tactics evolve in sophistication and scale, we continually adapt and enhance our advanced security features and AI-powered protections to help keep Android users safe. In addition to our new suite of advanced theft protection features to help keep your device and data safe in the case of theft, we're also focusing increasingly on providing additional protections against mobile financial fraud and scams. Today, we're announcing more new fraud and scam protection features coming in Android 15 and Google Play services updates later this year to help better protect users around the world. We're also sharing new tools and policies to help developers build safer apps and keep their users safe. Google Play Protect live threat detection Google Play Protect now scans 200 billion Android apps daily, helping keep more than 3 billion users safe from malware. We are expanding Play Protect's on-device AI capabilities with Google Play Protect live threat detection to improve fraud and abuse detection against apps that try to cloak their actions. With live threat detection, Google Play Protect's on-device AI will analyze additional behavioral signals related to the use of sensitive permissions and interactions with other apps and services. If suspicious behavior is discovered, Google Play Protect can send the app to Google for additional review and then warn users or disable the app if malicious behavior is confirmed. The detection of suspicious behavior is done on device in a privacy preserving way through Private Compute Core, which allows us to protect users without collecting data. Google Pixel, Honor, Lenovo, Nothing, OnePlus, Oppo, Sharp, Transsion, and other manufacturers are deploying live threat detection later this year.
Stronger protections against fraud and scams We're also bringing additional protections to fight fraud and scams in Android 15 with two key enhancements to safeguard your information and privacy from bad apps: Protecting One-time Passwords from Malware: With the exception of a few types of apps, such as wearable companion apps, one-time passwords are now hidden from notifications, closing a common attack vector for fraud and spyware. Expanded Restricted Settings: To help protect more sensitive permissions that are commonly abused by fraudsters, we're expanding Android 13's restricted settings, which require additional user approval to enable permissions when installing an app from an Internet-sideloading source (web browsers, messaging apps or file managers). We are continuing to develop new, AI-powered protections, Malware Tool Threat Mobile Cloud
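The live threat detection flow described above can be pictured as a scoring pass over on-device behavioral signals. The sketch below is purely illustrative: Google Play Protect's real model, signal names, weights, and review threshold are not public, and everything named here is invented for the example.

```python
# Purely illustrative sketch of on-device behavioral-signal triage.
# All signal names, weights, and the threshold are invented; Play
# Protect's actual detection model is not public.
SIGNAL_WEIGHTS = {
    "reads_sms_in_background": 0.5,
    "uses_accessibility_service": 0.3,
    "hides_launcher_icon": 0.4,
    "contacts_unknown_c2_domain": 0.6,
}

def suspicion_score(observed_signals):
    """Sum the weights of the behavioral signals observed for an app."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals)

def triage(observed_signals, review_threshold=0.8):
    """Return 'send_for_review' when the combined signals cross the threshold."""
    if suspicion_score(observed_signals) >= review_threshold:
        return "send_for_review"
    return "allow"

verdict_bad = triage(["reads_sms_in_background", "hides_launcher_icon"])
verdict_ok = triage(["uses_accessibility_service"])
```

In this toy model, one weak signal alone is allowed through, while a combination of signals sends the app for cloud-side review, mirroring the "additional review, then warn or disable" flow the post describes.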
GoogleSec.webp 2024-04-30 12:14:48 Detecting browser data theft using Windows Event Logs
(direct link)
Posted by Will Harris, Chrome Security Team Chrome's sandboxed process model defends well against malicious web content, but there are limits to how the application can protect itself from malware already on the machine. Cookies and other credentials remain a high-value target for attackers, and we are trying to tackle this ongoing threat in multiple ways, including working on web standards like DBSC that will help disrupt the cookie theft industry, since exfiltrating these cookies will no longer have any value. Where it is not possible to prevent the theft of credentials and cookies by malware, the next best thing is to make the attack more observable by antivirus software, endpoint detection agents, or enterprise administrators with basic log analysis tools. This blog describes a set of signals for use by system administrators or endpoint detection agents that should reliably flag any access to the browser's protected data from another application on the system. By increasing the likelihood of an attack being detected, this changes the calculus for attackers who might have a strong desire to remain stealthy, and may cause them to rethink these kinds of attacks against our users.
Background Chromium-based browsers on Windows use DPAPI (Data Protection API) to secure local secrets such as cookies, passwords, etc. DPAPI protection is based on a key derived from the user's login credentials and is designed to protect against unauthorized access to secrets by other users on the system, or when the system is powered off. Because the DPAPI secret is bound to the logged-in user, it cannot protect against local malware attacks: malware running as the user, or at a higher privilege level, can simply call the same APIs as the browser to obtain the DPAPI secret. Since 2013, Chromium has applied the CryptProtect_Audit flag to DPAPI calls to request that an audit log be generated when decryption occurs, as well as tagging the data as owned by the browser. Because all of Chromium's encrypted data storage is backed by a DPAPI-secured key, any application that wants to decrypt this data, including malware, should always reliably generate a clearly observable event log, which can be used to detect these types of attacks. There are three main steps involved in taking advantage of this log: Enable logging on the computer running Google Chrome, or any other Chromium-based browser. Export the event logs to your backend system. Create detection logic to detect theft. This blog will also show how the logging works in practice by testing it against a Python password stealer.
Step 1: Enable logging on the system DPAPI events are logged in two places on the system. First, there is the 4693 event that can be logged to the Security log. This event can be enabled by turning on "Audit DPAPI Activity", and the steps to do so are d Malware Tool Threat ★★
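The detection logic from step 3 can be sketched over event records that have already been exported (step 2) and flattened to dictionaries. This is a simplified sketch: the field names `event_id`, `data_description`, and `process_path` are stand-ins for the actual fields of the 4693 event record, and a real pipeline would extract them from the EVTX/XML in your SIEM.

```python
# Sketch of detection logic over exported Security-log records, flattened
# to dicts. Field names are simplified stand-ins for the 4693 event's
# real XML fields; a production pipeline would parse EVTX in a SIEM.
BROWSER_PROCESSES = {
    # Illustrative allow-list; extend with every legitimate browser binary.
    r"C:\Program Files\Google\Chrome\Application\chrome.exe",
}

def flag_dpapi_theft(events):
    """Yield 4693 (DPAPI decryption) events where the decrypted data was
    tagged as browser-owned but the calling process is not a known browser."""
    for e in events:
        if e.get("event_id") != 4693:
            continue
        if "Google Chrome" not in e.get("data_description", ""):
            continue  # data not tagged as owned by the browser
        if e.get("process_path") not in BROWSER_PROCESSES:
            yield e  # another application decrypted browser secrets

events = [
    {"event_id": 4693, "data_description": "Google Chrome",
     "process_path": r"C:\Users\victim\stealer.py"},
    {"event_id": 4693, "data_description": "Google Chrome",
     "process_path": r"C:\Program Files\Google\Chrome\Application\chrome.exe"},
]
suspicious = list(flag_dpapi_theft(events))
```

Only the first record is flagged: it decrypts browser-tagged data from an unexpected process, which is exactly the signal a Python password stealer would trip.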
GoogleSec.webp 2024-04-29 11:59:47 How we fought bad apps and bad actors in 2023
(direct link)
Posted by Steve Kafka and Khawaja Shams (Android Security and Privacy Team), and Mohet Saxena (Play Trust and Safety) A safe and trusted Google Play experience is our top priority. We leverage our SAFE (see below) principles to provide the framework to create that experience for both users and developers. Here's what these principles mean in practice: (S)afeguard our Users. Help them discover quality apps that they can trust. (A)dvocate for Developer Protection. Build platform safeguards to enable developers to focus on growth. (F)oster Responsible Innovation. Thoughtfully unlock value for all without compromising on user safety. (E)volve Platform Defenses. Stay ahead of emerging threats by evolving our policies, tools and technology. With those principles in mind, we've made recent improvements and introduced new measures to continue to keep Google Play's users safe, even as the threat landscape continues to evolve. In 2023, we prevented 2.28 million policy-violating apps from being published on Google Play1 in part thanks to our investment in new and improved security features, policy updates, and advanced machine learning and app review processes. We have also strengthened our developer onboarding and review processes, requiring more identity information when developers first establish their Play accounts. Together with investments in our review tooling and processes, we identified bad actors and fraud rings more effectively and banned 333K bad accounts from Play for violations like confirmed malware and repeated severe policy violations. Additionally, almost 200K app submissions were rejected or remediated to ensure proper use of sensitive permissions such as background location or SMS access. To help safeguard user privacy at scale, we partnered with SDK providers to limit sensitive data access and sharing, enhancing the privacy posture for over 31 SDKs impacting 790K+ apps.
We also significantly expanded the Google Play SDK Index, which now covers the SDKs used in almost 6 million apps across the Android ecosystem. This valuable resource helps developers make better SDK choices, boosts app quality and minimizes integration risks. Protecting the Android Ecosystem Building on our success with the App Defense Alliance (ADA), we partnered with Microsoft and Meta as steering committee members in the newly restructured ADA under the Joint Development Foundation, part of the Linux Foundation family. The Alliance will support industry-wide adoption of app security best practices and guidelines, as well as countermeasures against emerging security risks. Additionally, we announced new Play Store transparency labeling to highlight VPN apps that have completed an independent security review through App Defense Alliance's Mobile App Security Assessment (MASA). When a user searches for VPN apps, they will now see a banner at the top of Google Play that educates them about the “Independent security review” badge in the Data safety section. This helps users see at-a-glance that a developer has prioritized security and privacy best practices and is committed to user safety. Malware Tool Threat Mobile ★★★
GoogleSec.webp 2024-04-26 18:33:10 Accelerating incident response using generative AI
(direct link)
Lambert Rosique and Jan Keller, Security Workflow Automation, and Diana Kramer, Alexandra Bowen and Andrew Cho, Privacy and Security Incident Response Introduction As security professionals, we're constantly looking for ways to reduce risk and improve our workflow's efficiency. We've made great strides in using AI to identify malicious content, block threats, and discover and fix vulnerabilities. We also published the Secure AI Framework (SAIF), a conceptual framework for secure AI systems to ensure we are deploying AI in a responsible manner. Today we are highlighting another way we use generative AI to help the defenders gain the advantage: Leveraging LLMs (Large Language Model) to speed-up our security and privacy incidents workflows. Tool Threat Industrial Cloud ★★★
GoogleSec.webp 2024-04-23 13:15:47 Uncovering potential threats to your web application by leveraging security reports
(direct link)
Posted by Yoshi Yamaguchi, Santiago Díaz, Maud Nalpas, Eiji Kitamura, DevRel team The Reporting API is an emerging web standard that provides a generic reporting mechanism for issues occurring on the browsers visiting your production website. The reports you receive detail issues such as security violations or soon-to-be-deprecated APIs, from users' browsers from all over the world. Collecting reports is often as simple as specifying an endpoint URL in the HTTP header; the browser will automatically start forwarding reports covering the issues you are interested in to those endpoints. However, processing and analyzing these reports is not that simple. For example, you may receive a massive number of reports on your endpoint, and it is possible that not all of them will be helpful in identifying the underlying problem. In such circumstances, distilling and fixing issues can be quite a challenge. In this blog post, we'll share how the Google security team uses the Reporting API to detect potential issues and identify the actual problems causing them. We'll also introduce an open source solution, so you can easily replicate Google's approach to processing reports and acting on them. How does the Reporting API work? Some errors only occur in production, on users' browsers to which you have no access. You won't see these errors locally or during development because of the unexpected conditions that real users, real networks, and real devices are in. With the Reporting API, you directly leverage the browser to monitor these errors: the browser catches these errors for you, generates an error report, and sends this report to an endpoint you've specified. How reports are generated and sent.
Errors you can monitor with the Reporting API include: Security violations: Content-Security-Policy (CSP), Cross-Origin-Opener-Policy (COOP), Cross-Origin-Embedder-Policy (COEP) Deprecated and soon-to-be-deprecated API calls Browser interventions Permissions policy And more For a full list of error types you can monitor, see use cases and report types. The Reporting API is activated and configured using HTTP response headers: you need to declare the endpoint(s) you want the browser to send reports to, and which error types you want to monitor. The browser then sends reports to your endpoint in POST requests whose payload is a list of reports. Example setup:# Malware Tool Vulnerability Mobile Cloud ★★★
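The "distilling" step described above can be sketched in a few lines: a Reporting API POST body is a JSON list of report objects, each with a `type`, a `url`, and a type-specific `body`. Counting reports by type and, for CSP violations, by the violated directive quickly surfaces the dominant issue; the endpoint wiring itself is omitted here.

```python
from collections import Counter

# Minimal sketch of distilling a batch of Reporting API reports.
# Each report is an object with "type", "url", and a type-specific "body";
# CSP violation reports carry the violated directive in
# body["effectiveDirective"]. HTTP endpoint wiring is omitted.
def summarize_reports(reports):
    """Count reports by (type, violated directive) to surface the top issue."""
    counts = Counter()
    for r in reports:
        key = (r.get("type"), r.get("body", {}).get("effectiveDirective"))
        counts[key] += 1
    return counts

batch = [
    {"type": "csp-violation", "url": "https://example.com/a",
     "body": {"effectiveDirective": "script-src"}},
    {"type": "csp-violation", "url": "https://example.com/b",
     "body": {"effectiveDirective": "script-src"}},
    {"type": "deprecation", "url": "https://example.com/a", "body": {}},
]
summary = summarize_reports(batch)
```

Grouping like this is what turns "a massive number of reports" into a short ranked list of underlying problems worth fixing.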
GoogleSec.webp 2024-04-18 13:40:42 Prevent Generative AI Data Leaks with Chrome Enterprise DLP
(direct link)
Posted by Kaleigh Rosenblat, Chrome Enterprise Senior Staff Engineer, Security Lead Generative AI has become a powerful and popular tool for automating content creation and simple tasks. From customized content creation to source code generation, it can increase both our productivity and our creative potential. Businesses want to leverage the power of LLMs, like Gemini, but many may have security concerns and want more control over how employees use these new tools. For example, businesses may want to ensure that various forms of sensitive data, such as personally identifiable information (PII), financial records, and internal intellectual property, are not shared publicly on generative AI platforms. Security leaders face the challenge of striking the right balance: enabling employees to leverage AI to boost efficiency, while protecting corporate data. In this blog post, we will explore reporting and enforcement policies that enterprise security teams can implement within Chrome Enterprise Premium for data loss prevention (DLP). 1. View login events* to understand the use of generative AI services within the organization. With Chrome Enterprise's reporting connector, security and IT teams can see when a user successfully signs in to a specific domain, including generative AI websites. Security operations teams can leverage this telemetry to detect anomalies and threats by streaming the data into Chronicle or other third-party SIEMs at no additional cost. 2.
Enable URL filtering to warn users about sensitive data policies and let them decide whether or not they want to navigate to the URL, or to prevent users from navigating to certain groups of sites. For example, with Chrome Enterprise URL filtering, IT admins can create rules that warn developers not to submit source code to specific generative AI apps or tools, or block them from doing so. 3. Warn, block, or monitor sensitive data actions within generative AI websites with dynamic content-based rules for actions such as paste, file uploads/downloads, and print. Chrome Enterprise DLP rules give IT admins granular visibility into browser activities, such as entering financial information into gen AI websites. Admins can customize DLP rules to restr Tool Legislation ★★★
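The warn/block/monitor decision in step 3 can be sketched as a content rule applied to a paste action. This is purely illustrative: Chrome Enterprise DLP rules are configured in the Google Admin console rather than written as code, the site group and the credit-card regex below are naive placeholders, and real detectors are far more robust.

```python
import re

# Purely illustrative: Chrome Enterprise DLP rules are configured in the
# Admin console, not written as code. This only sketches the shape of a
# content-based paste rule. The card regex and site list are placeholders.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # naive credit-card detector
GEN_AI_SITES = {"gemini.google.com", "chat.example-ai.com"}  # example group

def evaluate_paste(text, destination_host, rule="warn"):
    """Return the DLP action for pasting `text` into a destination site.
    `rule` is the admin-chosen action: "warn", "block", or "monitor"."""
    if destination_host in GEN_AI_SITES and CARD_RE.search(text):
        return rule
    return "allow"

action = evaluate_paste("card: 4111 1111 1111 1111", "gemini.google.com",
                        rule="block")
```

With the rule set to "warn" instead of "block", the same match would let the user proceed after an interstitial, which is the graduated enforcement the post describes.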
GoogleSec.webp 2024-03-28 18:16:18 Address Sanitizer for Bare-metal Firmware
(direct link)
Posted by Eugene Rodionov and Ivan Lozano, Android Team With steady improvements to Android userspace and kernel security, we have noticed an increasing interest from security researchers directed towards lower level firmware. This area has traditionally received less scrutiny, but is critical to device security. We have previously discussed how we have been prioritizing firmware security, and how to apply mitigations in a firmware environment to mitigate unknown vulnerabilities. In this post we will show how the Kernel Address Sanitizer (KASan) can be used to proactively discover vulnerabilities earlier in the development lifecycle. Despite the narrow application implied by its name, KASan is applicable to a wide range of firmware targets. Using KASan enabled builds during testing and/or fuzzing can help catch memory corruption vulnerabilities and stability issues before they land on user devices. We've already used KASan in some firmware targets to proactively find and fix 40+ memory safety bugs and vulnerabilities, including some of critical severity. Along with this blog post we are releasing a small project which demonstrates an implementation of KASan for bare-metal targets leveraging the QEMU system emulator. Readers can refer to this implementation for technical details while following the blog post. Address Sanitizer (ASan) overview Address sanitizer is a compiler-based instrumentation tool used to identify invalid memory access operations during runtime. It is capable of detecting the following classes of temporal and spatial memory safety bugs: out-of-bounds memory access use-after-free double/invalid free use-after-return ASan relies on the compiler to instrument code with dynamic checks for virtual addresses used in load/store operations. A separate runtime library defines the instrumentation hooks for the heap memory and error reporting.
For most user-space targets (such as aarch64-linux-android) ASan can be enabled as simply as using the -fsanitize=address compiler option for Clang due to existing support of this target both in the toolchain and in the libclang_rt runtime. However, the situation is rather different for bare-metal code which is frequently built with the none system targets, such as arm-none-eabi. Unlike traditional user-space programs, bare-metal code running inside an embedded system often doesn't have a common runtime implementation. As such, LLVM can't provide a default runtime for these environments. To provide custom implementations for the necessary runtime routines, the Clang toolchain exposes an interface for address sanitization through the -fsanitize=kernel-address compiler option. The KASan runtime routines implemented in the Linux kernel serve as a great example of how to define a KASan runtime for targets which aren't supported by default with -fsanitize=address. We'll demonstrate how to use the version of address sanitizer originally built for the kernel on other bare-metal targets. KASan 101 Let's take a look at the KASan major building blocks from a high-level perspective (a thorough explanation of how ASan works under-the-hood is provided in this whitepaper). The main idea behind KASan is that every memory access operation, such as load/store instructions and memory copy functions (for example, memm Tool Vulnerability Mobile Technical ★★
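The shadow-memory scheme at the heart of (K)ASan can be modeled in a few lines: each 8-byte granule of application memory maps to one shadow byte at `shadow_offset + (addr >> 3)`, where 0 means fully addressable and a poison value marks a redzone. The toy model below illustrates that mapping in Python; the real runtime does this in C, inlines the check before every load/store, and also encodes partially addressable granules, which this sketch omits.

```python
# Toy model of ASan/KASan shadow memory: one shadow byte per 8-byte
# granule, located at shadow_offset + (addr >> 3). 0x00 = addressable,
# 0xFF = poisoned redzone. Partial-granule encoding is omitted; real
# runtimes implement this in C with the check inlined by the compiler.
GRANULE = 8
shadow = {}  # shadow index -> shadow byte

def shadow_index(addr, shadow_offset=0):
    return shadow_offset + (addr >> 3)

def poison(addr, size):
    """Mark [addr, addr+size) as a redzone (assumed granule-aligned)."""
    for a in range(addr, addr + size, GRANULE):
        shadow[shadow_index(a)] = 0xFF

def unpoison(addr, size):
    for a in range(addr, addr + size, GRANULE):
        shadow[shadow_index(a)] = 0x00

def check_access(addr):
    """The check the compiler inlines before each load/store."""
    return shadow.get(shadow_index(addr), 0x00) == 0x00

# A 16-byte heap object surrounded by 8-byte redzones:
unpoison(0x1010, 16)
poison(0x1008, 8)   # left redzone
poison(0x1020, 8)   # right redzone
```

An access one byte past the object lands in the right redzone, the check fails, and the runtime would report an out-of-bounds access instead of letting it corrupt memory silently.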
GoogleSec.webp 2024-02-05 11:59:31 Improving Interoperability Between Rust and C++
(direct link)
Posted by Lars Bergstrom – Director, Android Platform Tools & Libraries and Chair of the Rust Foundation Board In 2021, we announced that Google was joining the Rust Foundation. At the time, Rust was already in wide use across Android and other Google products. Our announcement highlighted our commitment to improving the security reviews of Rust code and its interoperability with C++ code. Rust is one of the strongest tools we have for addressing memory safety security issues. Since that announcement, industry leaders and government agencies have echoed our sentiment. We are delighted to announce that Google has provided a $1 million grant to the Rust Foundation to support efforts that will improve the ability of Rust code to interoperate with existing legacy C++ codebases. We are also renewing our existing commitment to the open-source Rust community by aggregating and publishing audits for the Rust crates that we use in open-source Google projects. These contributions, along with our previous interoperability contributions, have us excited about the future of Rust. "Based on historical vulnerability density statistics, Rust has proactively prevented hundreds of vulnerabilities from impacting the Android ecosystem. This investment aims to expand the adoption of Rust across various components of the platform." – Dave Kleidermacher, VP of Engineering, Android Security & Privacy While Google has seen the most significant growth in Rust usage in Android, we are continuing to grow its use across more applications, including clients and server hardware.
"While Rust may not be suitable for all product applications, prioritizing seamless interoperability with C++ will accelerate adoption by the wider community, thereby aligning with industry goals of improving memory safety." – Royal Hansen, Google VP of Safety & Security The Rust tooling and ecosystem already support interoperability with Android, and with continued investment in tools like cxx, autocxx, bindgen, cbindgen, diplomat, and crubit, we are seeing steady improvements in the state of Rust's interoperability with C++. As these improvements have continued, we have seen a reduction in barriers to adoption and accelerated adoption of Rust. While this progress across the many tools continues, it is often made only incrementally, to address the particular needs of a given project or company. In order to accelerate both the adoption of Rust Tool Vulnerability Mobile ★★★
GoogleSec.webp 2024-01-11 14:18:14 MiraclePtr: protecting users from use-after-free vulnerabilities on more platforms
(direct link)
Posted by Keishi Hattori, Sergei Glazunov, Bartek Nowierski on behalf of the MiraclePtr team Welcome back to our latest update on MiraclePtr, our project to protect against use-after-free vulnerabilities in Google Chrome. If you need a refresher, you can read our previous blog post detailing MiraclePtr and its objectives. More platforms We are thrilled to announce that since our last update, we have successfully enabled MiraclePtr for more platforms and processes: In June 2022, we enabled MiraclePtr for the browser process on Windows and Android. In September 2022, we expanded its coverage to include all processes except renderer processes. In June 2023, we enabled MiraclePtr for ChromeOS, macOS, and Linux. Furthermore, we have changed security guidelines to downgrade MiraclePtr-protected issues by one severity level! Evaluating Security Impact First let's focus on its security impact. Our analysis is based on two primary information sources: incoming vulnerability reports and crash reports from user devices. Let's take a closer look at each of these sources and how they inform our understanding of MiraclePtr's effectiveness. Bug reports Chrome vulnerability reports come from various sources, such as: Chrome Vulnerability Reward Program participants, our fuzzing infrastructure, internal and external teams investigating security incidents. For the purposes of this analysis, we focus on vulnerabilities that affect platforms where MiraclePtr was enabled at the time the issues were reported. We also exclude bugs that occur inside a sandboxed renderer process. Since the initial launch of MiraclePtr in 2022, we have received 168 use-after-free reports matching our criteria. What does the data tell us? MiraclePtr effectively mitigated 57% of these use-after-free vulnerabilities in privileged processes, exceeding our initial estimate of 50%. Reaching this level of effectiveness, however, required additional work.
For instance, we not only rewrote class fields to use MiraclePtr, as discussed in the previous post, but also added MiraclePtr support for bound function arguments, such as Unretained pointers. These pointers have been a significant source of use-after-frees in Chrome, and the additional protection allowed us to mitigate 39 more issues. Moreover, these vulnerability reports enable us to pinpoint areas needing improvement. We're actively working on adding support for select third-party libraries that have been a source of use-after-free bugs, as well as developing a more advanced rewriter tool that can handle transformations like converting std::vector<T*> into std::vector<raw_ptr<T>>. We've also made sever Tool Vulnerability Threat Mobile ★★★
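The core idea behind MiraclePtr's BackupRefPtr implementation can be sketched as a quarantine scheme: the allocator keeps a reference count alongside each allocation, and freeing memory while protected raw_ptr-style references still exist only quarantines it, so a dangling dereference hits poisoned, unreusable memory and can be caught. The Python toy model below illustrates that scheme under simplified assumptions; Chrome's actual C++ implementation inside PartitionAlloc differs in many details.

```python
# Toy model of the BackupRefPtr idea behind MiraclePtr: free() while
# protected references exist only quarantines (poisons) the allocation,
# so a use-after-free is detectable instead of hitting reused memory.
# Simplified illustration, not Chrome's actual PartitionAlloc code.
class Allocation:
    def __init__(self, data):
        self.data = data
        self.refs = 0          # live raw_ptr-style references
        self.freed = False     # the application called free()
        self.quarantined = False

class RawPtr:
    """Stand-in for Chrome's protected raw_ptr<T> smart pointer."""
    def __init__(self, alloc):
        self.alloc = alloc
        alloc.refs += 1

    def get(self):
        if self.alloc.quarantined:
            raise MemoryError("use-after-free caught: memory is quarantined")
        return self.alloc.data

    def release(self):
        self.alloc.refs -= 1
        if self.alloc.freed and self.alloc.refs == 0:
            self.alloc.quarantined = False  # now truly reusable
            self.alloc.data = None

def free(alloc):
    alloc.freed = True
    if alloc.refs > 0:
        alloc.quarantined = True  # keep the memory, but poisoned

obj = Allocation(data="payload")
p = RawPtr(obj)
free(obj)                  # a dangling pointer still exists -> quarantine

uaf_detected = False
try:
    p.get()                # the would-be use-after-free
except MemoryError:
    uaf_detected = True
p.release()                # last reference gone: memory may now be reused
```

The design choice this illustrates is why MiraclePtr-protected issues could be downgraded a severity level: the dangling pointer still exists, but dereferencing it no longer reaches attacker-reusable memory.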
GoogleSec.webp 2023-12-12 12:00:09 Hardening cellular basebands in Android
(direct link)
Posted by Ivan Lozano and Roger Piqueras Jover Android's defense-in-depth strategy applies not only to the Android OS running on the Application Processor (AP) but also the firmware that runs on devices. We particularly prioritize hardening the cellular baseband given its unique combination of running in an elevated privilege and parsing untrusted inputs that are remotely delivered into the device. This post covers how to use two high-value sanitizers which can prevent specific classes of vulnerabilities found within the baseband. They are architecture agnostic, suitable for bare-metal deployment, and should be enabled in existing C/C++ code bases to mitigate unknown vulnerabilities. Beyond security, addressing the issues uncovered by these sanitizers improves code health and overall stability, reducing resources spent addressing bugs in the future. An increasingly popular attack surface As we outlined previously, security research focused on the baseband has highlighted a consistent lack of exploit mitigations in firmware. Baseband Remote Code Execution (RCE) exploits have their own categorization in well-known third-party marketplaces with a relatively low payout. This suggests baseband bugs may potentially be abundant and/or not too complex to find and exploit, and their prominent inclusion in the marketplace demonstrates that they are useful. Baseband security and exploitation has been a recurring theme in security conferences for the last decade. Researchers have also made a dent in this area in well-known exploitation contests. Most recently, this area has become prominent enough that it is common to find practical baseband exploitation trainings in top security conferences. Acknowledging this trend, combined with the severity and apparent abundance of these vulnerabilities, last year we introduced updates to the severity guidelines of Android's Vulnerability Rewards Program (VRP).
For example, we consider vulnerabilities allowing Remote Code Execution (RCE) in the cellular baseband to be of CRITICAL severity. Mitigating Vulnerability Root Causes with Sanitizers Common classes of vulnerabilities can be mitigated through the use of sanitizers provided by Clang-based toolchains. These sanitizers insert runtime checks against common classes of vulnerabilities. GCC-based toolchains may also provide some level of support for these flags as well, but will not be considered further in this post. We encourage you to check your toolchain's documentation. Two sanitizers included in Undefine Tool Vulnerability Threat Mobile Prediction Conference ★★★
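The runtime checks these sanitizers insert can be illustrated with a small stand-in: before each access to a fixed-size array, a bounds sanitizer effectively emits a compare-and-trap. The Python sketch below only mimics the shape of that inserted check for a parsed over-the-air packet; the real instrumentation is emitted by Clang into the C/C++ baseband firmware, and the packet bytes here are made up.

```python
# Python stand-in for the compare-and-trap a bounds sanitizer inserts
# before each fixed-size array access in instrumented C/C++ firmware.
# Illustrative only; the real check is emitted by the compiler.
class TrapError(Exception):
    """Stand-in for the sanitizer's trap/abort handler."""

def checked_index(array, index):
    # The compiler knows the array bound statically and emits this compare.
    if not 0 <= index < len(array):
        raise TrapError(f"out-of-bounds access: index {index}, "
                        f"size {len(array)}")
    return array[index]

packet = [0x45, 0x00, 0x00, 0x34]  # made-up parsed over-the-air bytes
first = checked_index(packet, 0)   # in-bounds access passes through

oob_trapped = False
try:
    checked_index(packet, 7)  # attacker-controlled length walks off the end
except TrapError:
    oob_trapped = True
```

Trapping at the faulty access turns a silent memory corruption, potentially an RCE primitive in the baseband, into a clean, debuggable crash during testing or fuzzing.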
GoogleSec.webp 2023-11-02 12:00:24 More ways for users to identify independently security tested apps on Google Play
(direct link)
Posted by Nataliya Stanetsky, Android Security and Privacy Team Keeping Google Play safe for users and developers remains a top priority for Google. As users increasingly prioritize their digital privacy and security, we continue to invest in our Data Safety section and transparency labeling efforts to help users make more informed choices about the apps they use. Research shows that transparent security labeling plays a crucial role in consumer risk perception, building trust, and influencing product purchasing decisions. We believe the same principles apply for labeling and badging in the Google Play store. The transparency of an app's data security and privacy play a key role in a user's decision to download, trust, and use an app. Highlighting Independently Security Tested VPN Apps Last year, App Defense Alliance (ADA) introduced MASA (Mobile App Security Assessment), which allows developers to have their apps independently validated against a global security standard. This signals to users that an independent third-party has validated that the developers designed their apps to meet these industry mobile security and privacy minimum best practices and the developers are going the extra mile to identify and mitigate vulnerabilities. This, in turn, makes it harder for attackers to reach users' devices and improves app quality across the ecosystem. Upon completion of the successful validation, Google Play gives developers the option to declare an “Independent security review” badge in its Data Safety section, as shown in the image below. While certification to baseline security standards does not imply that a product is free of vulnerabilities, the badge associated with these validated apps helps users see at-a-glance that a developer has prioritized security and privacy practices and committed to user safety.
To help give users a simplified view of which apps have undergone an independent security validation, we're introducing a new Google Play store banner for specific app types, starting with VPN apps. We've launched this banner beginning with VPN apps due to the sensitive and significant amount of user data these apps handle. When a user searches for VPN apps, they will now see a banner at the top of Google Play that educates them about the “Independent security review” badge in the Data Safety Section. Users also have the ability to “Learn More”, which redirects them to the App Validation Directory, a centralized place to view all VPN apps that have been independently security reviewed. Users can also discover additional technical assessment details in the App Validation Directory, helping them to make more informed decisions about what VPN apps to download, use, and trust with their data. Tool Vulnerability Mobile Technical ★★
GoogleSec.webp 2023-10-26 08:49:41 Increasing transparency in AI security (lien direct) Mihai Maruseac, Sarah Meiklejohn, Mark Lodato, Google Open Source Security Team (GOSST)New AI innovations and applications are reaching consumers and businesses on an almost-daily basis. Building AI securely is a paramount concern, and we believe that Google\'s Secure AI Framework (SAIF) can help chart a path for creating AI applications that users can trust. Today, we\'re highlighting two new ways to make information about AI supply chain security universally discoverable and verifiable, so that AI can be created and used responsibly. The first principle of SAIF is to ensure that the AI ecosystem has strong security foundations. In particular, the software supply chains for components specific to AI development, such as machine learning models, need to be secured against threats including model tampering, data poisoning, and the production of harmful content. Even as machine learning and artificial intelligence continue to evolve rapidly, some solutions are now within reach of ML creators. We\'re building on our prior work with the Open Source Security Foundation to show how ML model creators can and should protect against ML supply chain attacks by using Malware Tool Vulnerability Threat Cloud ★★
GoogleSec.webp 2023-10-26 08:00:33 Google\\'s reward criteria for reporting bugs in AI products (lien direct) Eduardo Vela, Jan Keller and Ryan Rinaldi, Google Engineering In September, we shared how we are implementing the voluntary AI commitments that we and others in industry made at the White House in July. One of the most important developments involves expanding our existing Bug Hunter Program to foster third-party discovery and reporting of issues and vulnerabilities specific to our AI systems. Today, we\'re publishing more details on these new reward program elements for the first time. Last year we issued over $12 million in rewards to security researchers who tested our products for vulnerabilities, and we expect today\'s announcement to fuel even greater collaboration for years to come. What\'s in scope for rewards In our recent AI Red Team report, we identified common tactics, techniques, and procedures (TTPs) that we consider most relevant and realistic for real-world adversaries to use against AI systems. The following table incorporates shared learnings from Tool Vulnerability ★★
GoogleSec.webp 2023-10-10 15:39:40 Échelle au-delà ducorp avec les politiques de contrôle d'accès assistées par l'IA
Scaling BeyondCorp with AI-Assisted Access Control Policies
(lien direct)
Ayush Khandelwal, Software Engineer, Michael Torres, Security Engineer, Hemil Patel, Technical Product Expert, Sameer Ladiwala, Software Engineer In July 2023, four Googlers from the Enterprise Security and Access Security organizations developed SpeakACL, a tool aimed at revolutionizing the way Googlers interact with Access Control Lists. This tool, awarded the Gold Prize during Google's internal Security & AI Hackathon, allows developers to create or modify security policies using simple English instructions rather than having to learn system-specific syntax or complex security principles. This can save security and product teams hours of time and effort, while helping to protect the information of their users by encouraging the reduction of permitted access in line with the principle of least privilege.

Access Control Policies in BeyondCorp

Google requires developers and owners of enterprise applications to define their own access control policies, as described in BeyondCorp: The Access Proxy. We have invested in reducing the difficulty of self-service ACL and ACL test creation to encourage these service owners to define least-privilege access control policies. However, it is still challenging to concisely transform their intent into the language acceptable to the access control engine. Additional complexity is added by the variety of engines, and the corresponding policy definition languages, that target different access control domains (i.e., websites, networks, RPC servers). To adequately implement an access control policy, service developers are expected to learn various policy definition languages and their associated syntax, in addition to sufficiently understanding security concepts. As this takes time away from core developer work, it is not an efficient use of developer time. A solution was required to remove these challenges so developers can focus on building innovative tools and products.

Making it Work

We built a prototype interface for interactively defining and modifying access control policies for the Tool Vulnerability ★★★
GoogleSec.webp 2023-10-09 12:30:13 Rust à métal nu dans Android
Bare-metal Rust in Android
(lien direct)
Posted by Andrew Walbran, Android Rust Team Last year we wrote about how moving native code in Android from C++ to Rust has resulted in fewer security vulnerabilities. Most of the components we mentioned then were system services in userspace (running under Linux), but these are not the only components typically written in memory-unsafe languages. Many security-critical components of an Android system run in a “bare-metal” environment, outside of the Linux kernel, and these are historically written in C. As part of our efforts to harden firmware on Android devices, we are increasingly using Rust in these bare-metal environments too. To that end, we have rewritten the Android Virtualization Framework\'s protected VM (pVM) firmware in Rust to provide a memory safe foundation for the pVM root of trust. This firmware performs a similar function to a bootloader, and was initially built on top of U-Boot, a widely used open source bootloader. However, U-Boot was not designed with security in a hostile environment in mind, and there have been numerous security vulnerabilities found in it due to out of bounds memory access, integer underflow and memory corruption. Its VirtIO drivers in particular had a number of missing or problematic bounds checks. We fixed the specific issues we found in U-Boot, but by leveraging Rust we can avoid these sorts of memory-safety vulnerabilities in future. The new Rust pVM firmware was released in Android 14. As part of this effort, we contributed back to the Rust community by using and contributing to existing crates where possible, and publishing a number of new crates as well. For example, for VirtIO in pVM firmware we\'ve spent time fixing bugs and soundness issues in the existing virtio-drivers crate, as well as adding new functionality, and are now helping maintain this crate. We\'ve published crates for making PSCI and other Arm SMCCC calls, and for managing page tables. 
These are just a start; we plan to release more Rust crates to support bare-metal programming on a range of platforms. These crates are also being used outside of Android, such as in Project Oak and the bare-metal section of our Comprehensive Rust course. Training engineers Many engineers have been positively surprised by how p Tool Vulnerability ★★
GoogleSec.webp 2023-09-21 12:00:57 Échec de l'adoption de la rouille grâce à la formation
Scaling Rust Adoption Through Training
(lien direct)
Posted by Martin Geisler, Android team Android 14 is the third major Android release with Rust support. We are already seeing a number of benefits: Productivity: Developers quickly feel productive writing Rust. They report important indicators of development velocity, such as confidence in the code quality and ease of code review. Security: There has been a reduction in memory safety vulnerabilities as we shift more development to memory safe languages. These positive early results provided an enticing motivation to increase the speed and scope of Rust adoption. We hoped to accomplish this by investing heavily in training to expand from the early adopters. Scaling up from Early Adopters Early adopters are often willing to accept more risk to try out a new technology. They know there will be some inconveniences and a steep learning curve but are willing to learn, often on their own time. Scaling up Rust adoption required moving beyond early adopters. For that we need to ensure a baseline level of comfort and productivity within a set period of time. An important part of our strategy for accomplishing this was training. Unfortunately, the type of training we wanted to provide simply didn\'t exist. We made the decision to write and implement our own Rust training. Training Engineers Our goals for the training were to: Quickly ramp up engineers: It is hard to take people away from their regular work for a long period of time, so we aimed to provide a solid foundation for using Rust in days, not weeks. We could not make anybody a Rust expert in so little time, but we could give people the tools and foundation needed to be productive while they continued to grow. The goal is to enable people to use Rust to be productive members of their teams. The time constraints meant we couldn\'t teach people programming from scratch; we also decided not to teach macros or unsafe Rust in detail. 
Make it engaging (and fun!): We wanted people to see a lot of Rust while also getting hands-on experience. Given the scope and time constraints mentioned above, the training was necessarily information-dense. This called for an interactive setting where people could quickly ask questions to the instructor. Research shows that retention improves when people can quickly verify assumptions and practice new concepts. Make it relevant for Android: The Android-specific tooling for Rust was already documented, but we wanted to show engineers how to use it via worked examples. We also wanted to document emerging standards, such as using thiserror and anyhow crates for error handling. Finally, because Rust is a new language in the Android Platform (AOSP), we needed to show how to interoperate with existing languages such as Java and C++. With those three goals as a starting point, we looked at the existing material and available tools. Existing Material Documentation is a key value of the Rust community and there are many great resources available for learning Rust. First, there is the freely available Rust Book, which covers almost all of the language. Second, the standard library is extensively documented. Because we knew our target audience, we could make stronger assumptions than most material found online. We created the course for engineers with at least 2–3 years of coding experience in either C, C++, or Java. This allowed us to move Tool ★★
GoogleSec.webp 2023-09-15 14:11:38 Capslock: De quoi votre code est-il vraiment capable?
Capslock: What is your code really capable of?
(lien direct)
Jess McClintock and John Dethridge, Google Open Source Security Team, and Damien Miller, Enterprise Infrastructure Protection TeamWhen you import a third party library, do you review every line of code? Most software packages depend on external libraries, trusting that those packages aren\'t doing anything unexpected. If that trust is violated, the consequences can be huge-regardless of whether the package is malicious, or well-intended but using overly broad permissions, such as with Log4j in 2021. Supply chain security is a growing issue, and we hope that greater transparency into package capabilities will help make secure coding easier for everyone.Avoiding bad dependencies can be hard without appropriate information on what the dependency\'s code actually does, and reviewing every line of that code is an immense task.  Every dependency also brings its own dependencies, compounding the need for review across an expanding web of transitive dependencies. But what if there was an easy way to know the capabilities–the privileged operations accessed by the code–of your dependencies? Capslock is a capability analysis CLI tool that informs users of privileged operations (like network access and arbitrary code execution) in a given package and its dependencies. Last month we published the alpha version of Capslock for the Go language, which can analyze and report on the capabilities that are used beneath the surface of open source software.  Tool Vulnerability ★★
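To make the idea of a transitive capability concrete, here is a small hypothetical Go sketch of the kind of code a capability analysis like Capslock would flag: the helper's signature looks like a pure string transformation, yet any package that (even transitively) calls it inherits an exec-type capability. The function and program names are invented for illustration, and the exact capability label depends on the tool's taxonomy.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runFormatter looks like a pure string transformation from its API, but it
// shells out to an external binary. A capability analysis would report that
// every caller of this function, direct or transitive, carries the privilege
// of executing arbitrary commands.
func runFormatter(input string) (string, error) {
	out, err := exec.Command("echo", input).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Nothing in the call site hints at process execution; only the
	// capability report tells the real story.
	s, err := runFormatter("hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```

Running a capability reporter over a module containing this package would surface the exec privilege at the top level, even if `runFormatter` is buried several dependency layers down.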
GoogleSec.webp 2023-08-10 12:01:36 Rendre le chrome plus sécurisé en apportant une épingle clé sur Android
Making Chrome more secure by bringing Key Pinning to Android
(lien direct)
Posted by David Adrian, Joe DeBlasio and Carlos Joan Rafael Ibarra Lopez, Chrome Security Chrome 106 added support for enforcing key pins on Android by default, bringing Android to parity with Chrome on desktop platforms. But what is key pinning anyway? One of the reasons Chrome implements key pinning is the “rule of two”. This rule is part of Chrome\'s holistic secure development process. It says that when you are writing code for Chrome, you can pick no more than two of: code written in an unsafe language, processing untrustworthy inputs, and running without a sandbox. This blog post explains how key pinning and the rule of two are related. The Rule of Two Chrome is primarily written in the C and C++ languages, which are vulnerable to memory safety bugs. Mistakes with pointers in these languages can lead to memory being misinterpreted. Chrome invests in an ever-stronger multi-process architecture built on sandboxing and site isolation to help defend against memory safety problems. Android-specific features can be written in Java or Kotlin. These languages are memory-safe in the common case. Similarly, we\'re working on adding support to write Chrome code in Rust, which is also memory-safe. Much of Chrome is sandboxed, but the sandbox still requires a core high-privilege “broker” process to coordinate communication and launch sandboxed processes. In Chrome, the broker is the browser process. The browser process is the source of truth that allows the rest of Chrome to be sandboxed and coordinates communication between the rest of the processes. If an attacker is able to craft a malicious input to the browser process that exploits a bug and allows the attacker to achieve remote code execution (RCE) in the browser process, that would effectively give the attacker full control of the victim\'s Chrome browser and potentially the rest of the device. 
Conversely, if an attacker achieves RCE in a sandboxed process, such as a renderer, the attacker\'s capabilities are extremely limited. The attacker cannot reach outside of the sandbox unless they can additionally exploit the sandbox itself. Without sandboxing, which limits the actions an attacker can take, and without memory safety, which removes the ability of a bug to disrupt the intended control flow of the program, the rule of two requires that the browser process does not handle untrustworthy inputs. The relative risks between sandboxed processes and the browser process are why the browser process is only allowed to parse trustworthy inputs and specific IPC messages. Trustworthy inputs are defined extremely strictly: A “trustworthy source” means that Chrome can prove that the data comes from Google. Effectively, this means that in situations where the browser process needs access to data from e Tool ★★
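A key pin is conventionally the base64-encoded SHA-256 digest of a certificate's DER-encoded SubjectPublicKeyInfo. As a minimal sketch (not Chrome's actual implementation, which ships pins in its binary), computing such a pin looks like this in Go; the freshly generated ECDSA key stands in for the public key extracted from a real server certificate:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"encoding/base64"
	"fmt"
)

// spkiPin computes an HPKP-style pin: the base64 encoding of the SHA-256
// digest of a DER-encoded SubjectPublicKeyInfo. A pinning client compares
// this value against the expected pin set for the domain.
func spkiPin(spkiDER []byte) string {
	sum := sha256.Sum256(spkiDER)
	return base64.StdEncoding.EncodeToString(sum[:])
}

func main() {
	// Hypothetical example: pin a freshly generated key, standing in for
	// the public key found in a real server certificate chain.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	der, err := x509.MarshalPKIXPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Println("pin-sha256:", spkiPin(der))
}
```

Because the pin covers the public key rather than a whole certificate, a site can rotate certificates freely as long as it keeps (or backs up) the pinned key.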
GoogleSec.webp 2023-08-08 11:59:13 Android 14 présente les fonctionnalités de sécurité de la connectivité cellulaire en son genre
Android 14 introduces first-of-its-kind cellular connectivity security features
(lien direct)
Posted by Roger Piqueras Jover, Yomna Nasser, and Sudhi Herle Android is the first mobile operating system to introduce advanced cellular security mitigations for both consumers and enterprises. Android 14 introduces support for IT administrators to disable 2G support in their managed device fleet. Android 14 also introduces a feature that disables support for null-ciphered cellular connectivity. Hardening network security on Android The Android Security Model assumes that all networks are hostile to keep users safe from network packet injection, tampering, or eavesdropping on user traffic. Android does not rely on link-layer encryption to address this threat model. Instead, Android establishes that all network traffic should be end-to-end encrypted (E2EE). When a user connects to cellular networks for their communications (data, voice, or SMS), due to the distinctive nature of cellular telephony, the link layer presents unique security and privacy challenges. False Base Stations (FBS) and Stingrays exploit weaknesses in cellular telephony standards to cause harm to users. Additionally, a smartphone cannot reliably know the legitimacy of the cellular base station before attempting to connect to it. Attackers exploit this in a number of ways, ranging from traffic interception and malware sideloading, to sophisticated dragnet surveillance. Recognizing the far reaching implications of these attack vectors, especially for at-risk users, Android has prioritized hardening cellular telephony. We are tackling well-known insecurities such as the risk presented by 2G networks, the risk presented by null ciphers, other false base station (FBS) threats, and baseband hardening with our ecosystem partners. 2G and a history of inherent security risk The mobile ecosystem is rapidly adopting 5G, the latest wireless standard for mobile, and many carriers have started to turn down 2G service. In the United States, for example, most major carriers have shut down 2G networks. 
However, all existing mobile devices still have support for 2G. As a result, when available, any mobile device will connect to a 2G network. This occurs automatically when 2G is the only network available, but this can also be remotely triggered in a malicious attack, silently inducing devices to downgrade to 2G-only connectivity and thus ignoring any non-2G network. This behavior happens regardless of whether local operators have already sunset their 2G infrastructure. 2G networks, first implemented in 1991, do not provide the same level of security as subsequent mobile generations. Malware Tool Threat Conference ★★★
GoogleSec.webp 2023-07-27 12:01:55 Les hauts et les bas de 0 jours: une année en revue des 0 jours exploités dans le monde en 2022
The Ups and Downs of 0-days: A Year in Review of 0-days Exploited In-the-Wild in 2022
(lien direct)
Maddie Stone, Security Researcher, Threat Analysis Group (TAG)This is Google\'s fourth annual year-in-review of 0-days exploited in-the-wild [2021, 2020, 2019] and builds off of the mid-year 2022 review. The goal of this report is not to detail each individual exploit, but instead to analyze the exploits from the year as a whole, looking for trends, gaps, lessons learned, and successes. Executive Summary41 in-the-wild 0-days were detected and disclosed in 2022, the second-most ever recorded since we began tracking in mid-2014, but down from the 69 detected in 2021.  Although a 40% drop might seem like a clear-cut win for improving security, the reality is more complicated. Some of our key takeaways from 2022 include:N-days function like 0-days on Android due to long patching times. Across the Android ecosystem there were multiple cases where patches were not available to users for a significant time. Attackers didn\'t need 0-day exploits and instead were able to use n-days that functioned as 0-days. Tool Vulnerability Threat Prediction Conference ★★★
GoogleSec.webp 2023-07-20 16:03:33 Sécurité de la chaîne d'approvisionnement pour Go, partie 3: décalage à gauche
Supply chain security for Go, Part 3: Shifting left
(lien direct)
Julie Qiu, Go Security & Reliability and Jonathan Metzman, Google Open Source Security Team Previously in our Supply chain security for Go series, we covered dependency and vulnerability management tools and how Go ensures package integrity and availability as part of the commitment to countering the rise in supply chain attacks in recent years. In this final installment, we'll discuss how “shift left” security can help make sure you have the security information you need, when you need it, to avoid unwelcome surprises.

Shifting left

The software development life cycle (SDLC) refers to the series of steps that a software project goes through, from planning all the way through operation. It's a cycle because once code has been released, the process continues and repeats through actions like coding new features, addressing bugs, and more. Shifting left involves implementing security practices earlier in the SDLC. For example, consider scanning dependencies for known vulnerabilities; many organizations do this as part of continuous integration (CI), which ensures that code has passed security scans before it is released. However, if a vulnerability is first found during CI, significant time has already been invested building code upon an insecure dependency. Shifting left in this case means surfacing vulnerability information earlier in the development process. Tool Vulnerability ★★
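Shift-left dependency scanning is commonly backed by the OSV.dev service, whose `/v1/query` endpoint accepts a package name, ecosystem, and version. As a sketch, building such a query for a Go module looks like this (the module and version are illustrative; only the request body is constructed here, not the network call):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// osvQuery mirrors the request body of the OSV.dev /v1/query endpoint,
// which vulnerability scanners consult to flag known-vulnerable dependency
// versions before code ships.
type osvQuery struct {
	Version string `json:"version"`
	Package struct {
		Name      string `json:"name"`
		Ecosystem string `json:"ecosystem"`
	} `json:"package"`
}

// newGoQuery builds the JSON payload asking which advisories affect the
// given Go module at the given version.
func newGoQuery(module, version string) ([]byte, error) {
	var q osvQuery
	q.Version = version
	q.Package.Name = module
	q.Package.Ecosystem = "Go"
	return json.Marshal(&q)
}

func main() {
	body, err := newGoQuery("golang.org/x/text", "0.3.5")
	if err != nil {
		panic(err)
	}
	// POSTing this body to https://api.osv.dev/v1/query returns any
	// matching advisories as JSON.
	fmt.Println(string(body))
}
```

Running the same query in a pre-commit hook or editor plugin, rather than only in CI, is exactly the "shift left" the post describes.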
GoogleSec.webp 2023-06-16 13:11:38 Apporter la transparence à l'informatique confidentielle avec SLSA
Bringing Transparency to Confidential Computing with SLSA
(lien direct)
Asra Ali, Razieh Behjati, Tiziano Santoro, Software EngineersEvery day, personal data, such as location information, images, or text queries are passed between your device and remote, cloud-based services. Your data is encrypted when in transit and at rest, but as potential attack vectors grow more sophisticated, data must also be protected during use by the service, especially for software systems that handle personally identifiable user data.Toward this goal, Google\'s Project Oak is a research effort that relies on the confidential computing paradigm to build an infrastructure for processing sensitive user data in a secure and privacy-preserving way: we ensure data is protected during transit, at rest, and while in use. As an assurance that the user data is in fact protected, we\'ve open sourced Project Oak code, and have introduced a transparent release process to provide publicly inspectable evidence that the application was built from that source code. This blog post introduces Oak\'s transparent release process, which relies on the SLSA framework to generate cryptographic proof of the origin of Oak\'s confidential computing stack, and together with Oak\'s remote attestation process, allows users to cryptographically verify that their personal data was processed by a trustworthy application in a secure environment.  Tool ★★
GoogleSec.webp 2023-05-25 12:00:55 API de services Google Trust ACME disponibles pour tous les utilisateurs sans frais
Google Trust Services ACME API available to all users at no cost
(lien direct)
David Kluge, Technical Program Manager, and Andy Warner, Product ManagerNobody likes preventable site errors, but they happen disappointingly often. The last thing you want your customers to see is a dreaded \'Your connection is not private\' error instead of the service they expected to reach. Most certificate errors are preventable and one of the best ways to help prevent issues is by automating your certificate lifecycle using the ACME standard. Google Trust Services now offers our ACME API to all users with a Google Cloud account (referred to as “users” here), allowing them to automatically acquire and renew publicly-trusted TLS certificates for free. The ACME API has been available as a preview and over 200 million certificates have been issued already, offering the same compatibility as major Google services like google.com or youtube.com. Tool Cloud ★★★
GoogleSec.webp 2023-05-24 12:49:28 Annonçant le lancement de Guac V0.1
Announcing the launch of GUAC v0.1
(lien direct)
Brandon Lum and Mihai Maruseac, Google Open Source Security TeamToday, we are announcing the launch of the v0.1 version of Graph for Understanding Artifact Composition (GUAC). Introduced at Kubecon 2022 in October, GUAC targets a critical need in the software industry to understand the software supply chain. In collaboration with Kusari, Purdue University, Citi, and community members, we have incorporated feedback from our early testers to improve GUAC and make it more useful for security professionals. This improved version is now available as an API for you to start developing on top of, and integrating into, your systems.The need for GUACHigh-profile incidents such as Solarwinds, and the recent 3CX supply chain double-exposure, are evidence that supply chain attacks are getting more sophisticated. As highlighted by the Tool Vulnerability Threat Yahoo ★★
GoogleSec.webp 2023-05-10 14:59:36 E / S 2023: Ce qui est nouveau dans la sécurité et la confidentialité d'Android
I/O 2023: What\\'s new in Android security and privacy
(lien direct)
Posted by Ronnie Falcon, Product Manager Android is built with multiple layers of security and privacy protections to help keep you, your devices, and your data safe. Most importantly, we are committed to transparency, so you can see your device safety status and know how your data is being used. Android uses the best of Google\'s AI and machine learning expertise to proactively protect you and help keep you out of harm\'s way. We also empower you with tools that help you take control of your privacy. I/O is a great moment to show how we bring these features and protections all together to help you stay safe from threats like phishing attacks and password theft, while remaining in charge of your personal data. Safe Browsing: faster more intelligent protection Android uses Safe Browsing to protect billions of users from web-based threats, like deceptive phishing sites. This happens in the Chrome default browser and also in Android WebView, when you open web content from apps. Safe Browsing is getting a big upgrade with a new real-time API that helps ensure you\'re warned about fast-emerging malicious sites. With the newest version of Safe Browsing, devices will do real-time blocklist checks for low reputation sites. Our internal analysis has found that a significant number of phishing sites only exist for less than ten minutes to try and stay ahead of block-lists. With this real-time detection, we expect we\'ll be able to block an additional 25 percent of phishing attempts every month in Chrome and Android1. Safe Browsing isn\'t just getting faster at warning users. We\'ve also been building in more intelligence, leveraging Google\'s advances in AI. Last year, Chrome browser on Android and desktop started utilizing a new image-based phishing detection machine learning model to visually inspect fake sites that try to pass themselves off as legitimate log-in pages. 
By leveraging a TensorFlow Lite model, we\'re able to find 3x more2 phishing sites compared to previous machine learning models and help warn you before you get tricked into signing in. This year, we\'re expanding the coverage of the model to detect hundreds of more phishing campaigns and leverage new ML technologies. This is just one example of how we use our AI expertise to keep your data safe. Last year, Android used AI to protect users from 100 billion suspected spam messages and calls.3 Passkeys helps move users beyond passwords For many, passwords are the primary protection for their online life. In reality, they are frustrating to create, remember and are easily hacked. But hackers can\'t phish a password that doesn\'t exist. Which is why we are excited to share another major step forward in our passwordless journey: Passkeys. Spam Malware Tool ★★★
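The blocklist checks described above are typically built on a hash-prefix scheme: the client hashes a canonicalized URL expression and looks up only a short digest prefix, so a full URL need not be revealed just to rule a site out; only a prefix hit triggers a follow-up query for full hashes. The sketch below illustrates that scheme in Go under assumed details (the 4-byte prefix length and the example URLs are illustrative, not Safe Browsing's actual parameters):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// prefixLen is the number of leading digest bytes used as the lookup key.
const prefixLen = 4

// blocklist maps digest prefixes of known-bad URL expressions. A prefix hit
// is only a candidate match: the client must still fetch the full hashes to
// confirm, which keeps false positives from blocking benign sites.
type blocklist map[[prefixLen]byte]bool

// prefixOf hashes a canonicalized URL expression and keeps the prefix.
func prefixOf(urlExpr string) (p [prefixLen]byte) {
	sum := sha256.Sum256([]byte(urlExpr))
	copy(p[:], sum[:prefixLen])
	return p
}

// needsFullHashCheck reports whether the URL's prefix is on the list and
// therefore warrants a real-time full-hash lookup.
func (b blocklist) needsFullHashCheck(urlExpr string) bool {
	return b[prefixOf(urlExpr)]
}

func main() {
	bad := blocklist{prefixOf("phish.example/login"): true}
	fmt.Println(bad.needsFullHashCheck("phish.example/login")) // candidate match
	fmt.Println(bad.needsFullHashCheck("example.com/"))        // no lookup needed
}
```

The real-time variant described in the post moves the prefix lookup itself to the server for low-reputation sites, trading a network round trip for blocklist freshness against short-lived phishing pages.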
GoogleSec.webp 2023-04-26 11:00:21 Célébrer SLSA v1.0: sécuriser la chaîne d'approvisionnement des logiciels pour tout le monde
Celebrating SLSA v1.0: securing the software supply chain for everyone
(lien direct)
Bob Callaway, Staff Security Engineer, Google Open Source Security team Last week the Open Source Security Foundation (OpenSSF) announced the release of SLSA v1.0, a framework that helps secure the software supply chain. Ten years of using an internal version of SLSA at Google has shown that it\'s crucial to warding off tampering and keeping software secure. It\'s especially gratifying to see SLSA reaching v1.0 as an open source project-contributors have come together to produce solutions that will benefit everyone. SLSA for safer supply chains Developers and organizations that adopt SLSA will be protecting themselves against a variety of supply chain attacks, which have continued rising since Google first donated SLSA to OpenSSF in 2021. In that time, the industry has also seen a U.S. Executive Order on Cybersecurity and the associated NIST Secure Software Development Framework (SSDF) to guide national standards for software used by the U.S. government, as well as the Network and Information Security (NIS2) Directive in the European Union. SLSA offers not only an onramp to meeting these standards, but also a way to prepare for a climate of increased scrutiny on software development practices. As organizations benefit from using SLSA, it\'s also up to them to shoulder part of the burden of spreading these benefits to open source projects. Many maintainers of the critical open source projects that underpin the internet are volunteers; they cannot be expected to do all the work when so many of the rewards of adopting SLSA roll out across the supply chain to benefit everyone. Supply chain security for all That\'s why beyond contributing to SLSA, we\'ve also been laying the foundation to integrate supply chain solutions directly into the ecosystems and platforms used to create open source projects. We\'re also directly supporting open source maintainers, who often cite lack of time or resources as limiting factors when making security improvements to their projects. 
Our Open Source Security Upstream Team consists of developers who spend 100% of their time contributing to critical open source projects to make security improvements. For open source developers who choose to adopt SLSA on their own, we\'ve funded the Secure Open Source Rewards Program, which pays developers directly for these types of security improvements. Currently, open source developers who want to secure their builds can use the free SLSA L3 GitHub Builder, which requires only a one-time adjustment to the traditional build process implemented through GitHub actions. There\'s also the SLSA Verifier tool for software consumers. Users of npm-or Node Package Manager, the world\'s largest software repository-can take advantage of their recently released beta SLSA integration, which streamlines the process of creating and verifying SLSA provenance through the npm command line interface. We\'re also supporting the integration of Sigstore into many major Tool Patching ★★
GoogleSec.webp 2023-04-13 12:04:31 Sécurité de la chaîne d'approvisionnement pour GO, partie 1: Gestion de la vulnérabilité
Supply chain security for Go, Part 1: Vulnerability management
(lien direct)
Posted by Julie Qiu, Go Security & Reliability and Oliver Chang, Google Open Source Security Team High profile open source vulnerabilities have made it clear that securing the supply chains underpinning modern software is an urgent, yet enormous, undertaking. As supply chains get more complicated, enterprise developers need to manage the tidal wave of vulnerabilities that propagate up through dependency trees. Open source maintainers need streamlined ways to vet proposed dependencies and protect their projects. A rise in attacks coupled with increasingly complex supply chains means that supply chain security problems need solutions on the ecosystem level. One way developers can manage this enormous risk is by choosing a more secure language. As part of Google\'s commitment to advancing cybersecurity and securing the software supply chain, Go maintainers are focused this year on hardening supply chain security, streamlining security information to our users, and making it easier than ever to make good security choices in Go. This is the first in a series of blog posts about how developers and enterprises can secure their supply chains with Go. Today\'s post covers how Go helps teams with the tricky problem of managing vulnerabilities in their open source packages. Extensive Package Insights Before adopting a dependency, it\'s important to have high-quality information about the package. Seamless access to comprehensive information can be the difference between an informed choice and a future security incident from a vulnerability in your supply chain. Along with providing package documentation and version history, the Go package discovery site links to Open Source Insights. The Open Source Insights page includes vulnerability information, a dependency tree, and a security score provided by the OpenSSF Scorecard project. 
Scorecard evaluates projects on more than a dozen security metrics, each backed up with supporting information, and assigns the project an overall score out of ten to help users quickly judge its security stance (example). The Go package discovery site puts all these resources at developers' fingertips when they need them most-before taking on a potentially risky dependency.

Curated Vulnerability Information

Large consumers of open source software must manage many packages and a high volume of vulnerabilities. For enterprise teams, filtering out noisy, low quality advisories and false positives from critical vulnerabilities is often the most important task in vulnerability management. If it is difficult to tell which vulnerabilities are important, it is impossible to properly prioritize their remediation. With granular advisory details, the Go vulnerability database removes barriers to vulnerability prioritization and remediation. All vulnerability database entries are reviewed and curated by the Go security team. As a result, entries are accurate and include detailed metadata to improve the quality of vulnerability scans and to make vulnerability information more actionable. This metadata includes information on affected functions, operating systems, and architectures. With this information, vulnerability scanners can reduce the number of false positives. Tool Vulnerability ★★
GoogleSec.webp 2023-04-11 12:11:33 Annonce de l'API DEPS.DEV: données de dépendance critiques pour les chaînes d'approvisionnement sécurisées
Announcing the deps.dev API: critical dependency data for secure supply chains
(lien direct)
Posted by Jesper Sarnesjo and Nicky Ringland, Google Open Source Security Team

Today, we are excited to announce the deps.dev API, which provides free access to the deps.dev dataset of security metadata, including dependencies, licenses, advisories, and other critical health and security signals for more than 50 million open source package versions. Software supply chain attacks are increasingly common and harmful, with high profile incidents such as Log4Shell, Codecov, and the recent 3CX hack. The overwhelming complexity of the software ecosystem causes trouble for even the most diligent and well-resourced developers. We hope the deps.dev API will help the community make sense of complex dependency data that allows them to respond to-or even prevent-these types of attacks. By integrating this data into tools, workflows, and analyses, developers can more easily understand the risks in their software supply chains.

The power of dependency data

As part of Google's ongoing efforts to improve open source security, the Open Source Insights team has built a reliable view of software metadata across 5 packaging ecosystems. The deps.dev data set is continuously updated from a range of sources: package registries, the Open Source Vulnerability database, code hosts such as GitHub and GitLab, and the software artifacts themselves. This includes 5 million packages and more than 50 million versions from the Go, Maven, PyPI, npm, and Cargo ecosystems. We collect and aggregate this data and derive transitive dependency graphs, advisory impact reports, OpenSSF Security Scorecard information, and more. Where the deps.dev website allows human exploration and examination, and the BigQuery dataset supports large-scale bulk data analysis, this new API enables programmatic, real-time access to the corpus for integration into tools, workflows, and analyses.
The API is used by a number of teams internally at Google to support the security of our own products. One of the first publicly visible uses is the GUAC integration, which uses the deps.dev data to enrich SBOMs. We have more exciting integrations in the works, but we're most excited to see what the greater open source community builds! We see the API as being useful for tool builders, researchers, and tinkerers who want to answer questions like:

- What versions are available for this package?
- What are the licenses that cover this version of a package-or all the packages in my codebase?
- How many dependencies does this package have? What are they?
- Does the latest version of this package include changes to dependencies or licenses?
- What versions of what packages correspond to this file?

Taken together, this information can help answer the most important overarching question: how much risk would this dependency add to my project? The API can help surface critical security information where and when developers can act. This data can be integrated into: IDE Plugins, to make dependency and security information immediately available. CI Tool Vulnerability ★★
GoogleSec.webp 2023-03-08 12:04:53 OSV and the Vulnerability Life Cycle (lien direct) Posted by Oliver Chang and Andrew Pollock, Google Open Source Security Team It is an interesting time for everyone concerned with open source vulnerabilities. The U.S. Executive Order on Improving the Nation's Cybersecurity requirements for vulnerability disclosure programs and assurances for software used by the US government will go into effect later this year. Finding and fixing security vulnerabilities has never been more important, yet with increasing interest in the area, the vulnerability management space has become fragmented-there are a lot of new tools and competing standards. In 2021, we announced the launch of OSV, a database of open source vulnerabilities built partially from vulnerabilities found through Google's OSS-Fuzz program. OSV has grown since then and now includes a widely adopted OpenSSF schema and a vulnerability scanner. In this blog post, we'll cover how these tools help maintainers track vulnerabilities from discovery to remediation, and how to use OSV together with other SBOM and VEX standards. Vulnerability Databases The lifecycle of a known vulnerability begins when it is discovered. To reach developers, the vulnerability needs to be added to a database. CVEs are the industry standard for describing vulnerabilities across all software, but there was a lack of an open source centric database. As a result, several independent vulnerability databases exist across different ecosystems. To address this, we announced the OSV Schema to unify open source vulnerability databases. The schema is machine readable, and is designed so dependencies can be easily matched to vulnerabilities using automation. The OSV Schema remains the only widely adopted schema that treats open source as a first class citizen. 
Since becoming a part of OpenSSF, the OSV Schema has seen adoption from services like GitHub, ecosystems such as Rust and Python, and Linux distributions such as Rocky Linux. Thanks to such wide community adoption of the OSV Schema, OSV.dev is able to provide a distributed vulnerability database and service that pulls from language specific authoritative sources. In total, the OSV.dev database now includes 43,302 vulnerabilities from 16 ecosystems as of March 2023. Users can check OSV for a comprehensive view of all known vulnerabilities in open source. Every vulnerability in OSV.dev contains package manager versions and git commit hashes, so open source users can easily determine if their packages are impacted because of the familiar style of versioning. Maintainers are also familiar with OSV's community driven and distributed collaboration on the development of OSV's database, tools, and schema. Matching The next step in managing vulnerabilities is to determine project dependencies and their associated vulnerabilities. Last December we released OSV-Scanner, a free, open source tool which scans software projects' lockfiles, SBOMs, or git repositories to identify vulnerabilities found in the Tool Vulnerability ★★★★
GoogleSec.webp 2023-03-08 11:59:13 Thank you and goodbye to the Chrome Cleanup Tool (lien direct) Posted by Jasika Bawa, Chrome Security Team Starting in Chrome 111 we will begin to turn down the Chrome Cleanup Tool, an application distributed to Chrome users on Windows to help find and remove unwanted software (UwS). Origin story The Chrome Cleanup Tool was introduced in 2015 to help users recover from unexpected settings changes, and to detect and remove unwanted software. To date, it has performed more than 80 million cleanups, helping to pave the way for a cleaner, safer web. A changing landscape In recent years, several factors have led us to reevaluate the need for this application to keep Chrome users on Windows safe. First, the user perspective – Chrome user complaints about UwS have continued to fall over the years, averaging out to around 3% of total complaints in the past year. Commensurate with this, we have observed a steady decline in UwS findings on users' machines. For example, last month just 0.06% of Chrome Cleanup Tool scans run by users detected known UwS. Next, several positive changes in the platform ecosystem have contributed to a more proactive safety stance than a reactive one. For example, Google Safe Browsing as well as antivirus software both block file-based UwS more effectively now, which was originally the goal of the Chrome Cleanup Tool. Where file-based UwS migrated over to extensions, our substantial investments in the Chrome Web Store review process have helped catch malicious extensions that violate the Chrome Web Store's policies. Finally, we've observed changing trends in the malware space with techniques such as Cookie Theft on the rise – as such, we've doubled down on defenses against such malware via a variety of improvements including hardened authentication workflows and advanced heuristics for blocking phishing and social engineering emails, malware landing pages, and downloads. 
What to expect Starting in Chrome 111, users will no longer be able to request a Chrome Cleanup Tool scan through Safety Check or leverage the "Reset settings and cleanup" option offered in chrome://settings on Windows. Chrome will also remove the component that periodically scans Windows machines and prompts users for cleanup should it find anything suspicious. Even without the Chrome Cleanup Tool, users are automatically protected by Safe Browsing in Chrome. Users also have the option to turn on Enhanced protection by navigating to chrome://settings/security – this mode substantially increases protection from dangerous websites and downloads by sharing real-time data with Safe Browsing. While we'll miss the Chrome Cleanup Tool, we wanted to take this opportunity to acknowledge its role in combating UwS for the past 8 years. We'll continue to monitor user feedback and trends in the malware ecosystem, and when adversaries adapt their techniques again – which they will – we'll be at the ready. As always, please feel free to send us feedback or find us on Twitter @googlechrome. Malware Tool ★★★
GoogleSec.webp 2023-03-01 11:59:44 8 ways to secure Chrome browser for Google Workspace users (lien direct)
Posted by Kiran Nair, Product Manager, Chrome Browser

Your journey towards keeping your Google Workspace users and data safe starts with bringing your Chrome browsers under Cloud Management at no additional cost. Chrome Browser Cloud Management is a single destination for applying Chrome Browser policies and security controls across Windows, Mac, Linux, iOS and Android. You also get deep visibility into your browser fleet, including which browsers are out of date and which extensions your users are using, bringing insight to potential security blindspots in your enterprise. Managing Chrome from the cloud allows Google Workspace admins to enforce enterprise protections and policies across the whole browser on fully managed devices, which no longer requires a user to sign into Chrome to have policies enforced. You can also enforce policies that apply when your managed users sign in to Chrome browser on any Windows, Mac, or Linux computer (via Chrome Browser user-level management), not just on corporate managed devices. This enables you to keep your corporate data and users safe, whether they are accessing work resources from fully managed, personal, or unmanaged devices used by your vendors. Getting started is easy. If your organization hasn't already, check out this guide for steps on how to enroll your devices.

2. Enforce built-in protections against Phishing, Ransomware & Malware

Chrome uses Google's Safe Browsing technology to help protect billions of devices every day by showing warnings to users when they attempt to navigate to dangerous sites or download dangerous files. Safe Browsing is enabled by default for all users when they download Chrome. As an administrator, you can prevent your users from disabling Safe Browsing by enforcing the SafeBrowsingProtectionLevel policy.
Over the past few years, we've seen threats on the web becoming increasingly sophisticated. Turning on Enhanced Safe Browsing will substantially increase protection Ransomware Malware Tool Threat Guideline Cloud ★★★
GoogleSec.webp 2023-02-01 13:00:49 Taking the next step: OSS-Fuzz in 2023 (lien direct)
Posted by Oliver Chang, OSS-Fuzz team

Since launching in 2016, Google's free OSS-Fuzz code testing service has helped get over 8800 vulnerabilities and 28,000 bugs fixed across 850 projects. Today, we're happy to announce an expansion of our OSS-Fuzz Rewards Program, plus new features in OSS-Fuzz and our involvement in supporting academic fuzzing research.

Refreshed OSS-Fuzz rewards

The OSS-Fuzz project's purpose is to support the open source community in adopting fuzz testing, or fuzzing - an automated code testing technique for uncovering bugs in software. In addition to the OSS-Fuzz service, which provides a free platform for continuous fuzzing to critical open source projects, we established an OSS-Fuzz Reward Program in 2017 as part of our wider Patch Rewards Program. We've operated this successfully for the past 5 years, and to date, the OSS-Fuzz Reward Program has awarded over $600,000 to over 65 different contributors for their help integrating new projects into OSS-Fuzz. Today, we're excited to announce that we've expanded the scope of the OSS-Fuzz Reward Program considerably, introducing many new types of rewards! These new reward types cover contributions such as:

- Project fuzzing coverage increases
- Notable FuzzBench fuzzer integrations
- Integrating a new sanitizer (example) that finds two new vulnerabilities

These changes boost the total rewards possible per project integration from a maximum of $20,000 to $30,000 (depending on the criticality of the project). In addition, we've also established two new reward categories that reward wider improvements across all OSS-Fuzz projects, with up to $11,337 available per category. For more details, see the fully updated rules for our dedicated OSS-Fuzz Reward Program.
OSS-Fuzz improvements

We've continuously made improvements to OSS-Fuzz's infrastructure over the years and expanded our language offerings to cover C/C++, Go, Rust, Java, Python, and Swift, and have introduced support for new frameworks such as FuzzTest. Additionally, as part of an ongoing collaboration with Code Intelligence, we'll soon have support for JavaScript fuzzing through Jazzer.js.

FuzzIntrospector support

Last year, we launched the OpenSSF FuzzIntrospector tool and integrated it into OSS-Fuzz. We've continued to build on this by adding new language support and better analysis, and now C/C++, Python, and Java projects integrated into OSS-Fuzz have detailed insights on how the coverage and fuzzing effectiveness for a project can be improved. The Tool ★★★★★
GoogleSec.webp 2022-12-13 13:00:47 Announcing OSV-Scanner: Vulnerability Scanner for Open Source (lien direct)
Posted by Rex Pan, software engineer, Google Open Source Security Team

Today, we're launching the OSV-Scanner, a free tool that gives open source developers easy access to vulnerability information relevant to their project. Last year, we undertook an effort to improve vulnerability triage for developers and consumers of open source software. This involved publishing the Open Source Vulnerability (OSV) schema and launching the OSV.dev service, the first distributed open source vulnerability database. OSV allows all the different open source ecosystems and vulnerability databases to publish and consume information in one simple, precise, and machine readable format. The OSV-Scanner is the next step in this effort, providing an officially supported frontend to the OSV database that connects a project's list of dependencies with the vulnerabilities that affect them.

OSV-Scanner

Software projects are commonly built on top of a mountain of dependencies-external software libraries you incorporate into a project to add functionalities without developing them from scratch. Each dependency potentially contains existing known vulnerabilities or new vulnerabilities that could be discovered at any time. There are simply too many dependencies and versions to keep track of manually, so automation is required. Scanners provide this automated capability by matching your code and dependencies against lists of known vulnerabilities and notifying you if patches or updates are needed. Scanners bring incredible benefits to project security, which is why the 2021 U.S.
Executive Order for Cybersecurity included this type of automation as a requirement for national standards on secure software development.

The OSV-Scanner generates reliable, high-quality vulnerability information that closes the gap between a developer's list of packages and the information in vulnerability databases. Since the OSV.dev database is open source and distributed, it has several benefits in comparison with closed source advisory databases and scanners:

- Each advisory comes from an open and authoritative source (e.g. the RustSec Advisory Database)
- Anyone can suggest improvements to advisories, resulting in a very high quality database
- The OSV format unambiguously stores information about affected versions in a machine-readable format that precisely maps onto a developer's list of packages
- The above all results in fewer, more actionable vulnerability notifications, which reduces the time needed to resolve them

Running OSV-Scanner on your project will first find all the transitive dependencies that are being used by analyzing manifests, SBOMs, and commit hashes. The scanner then connects this information with the OSV database and displays the vulnerabilities relevant to your project. Tool Vulnerability ★★★
GoogleSec.webp 2022-10-20 13:01:02 Announcing GUAC, a great pairing with SLSA (and SBOM)! (lien direct)
Posted by Brandon Lum, Mihai Maruseac, Isaac Hepworth, Google Open Source Security Team

Supply chain security is at the fore of the industry's collective consciousness. We've recently seen a significant rise in software supply chain attacks, a Log4j vulnerability of catastrophic severity and breadth, and even an Executive Order on Cybersecurity. It is against this background that Google is seeking contributors to a new open source project called GUAC (pronounced like the dip). GUAC, or Graph for Understanding Artifact Composition, is in the early stages yet is poised to change how the industry understands software supply chains. GUAC addresses a need created by the burgeoning efforts across the ecosystem to generate software build, security, and dependency metadata. True to Google's mission to organize and make the world's information universally accessible and useful, GUAC is meant to democratize the availability of this security information by making it freely accessible and useful for every organization, not just those with enterprise-scale security and IT funding. Thanks to community collaboration in groups such as OpenSSF, SLSA, SPDX, CycloneDX, and others, organizations increasingly have ready access to:

- Software Bills of Materials (SBOMs) (with SPDX-SBOM-Generator, Syft, kubernetes bom tool)
- signed attestations about how software was built (e.g. SLSA with SLSA3 Github Actions Builder, Google Cloud Build)
- vulnerability databases that aggregate information across ecosystems and make vulnerabilities more discoverable and actionable (e.g. OSV.dev, Global Security Database (GSD)).

These data are useful on their own, but it's difficult to combine and synthesize the information for a more comprehensive view.
The documents are scattered across different databases and producers, are attached to different ecosystem entities, and cannot be easily aggregated to answer higher-level questions about an organization's software assets. To help address this issue we've teamed up with Kusari, Purdue University, and Citi to create GUAC, a free tool to bring together many different sources of software security metadata. We're excited to share the project's proof of concept, which lets you query a small dataset of software metadata including SLSA provenance, SBOMs, and OpenSSF Scorecards.

What is GUAC

Graph for Understanding Artifact Composition (GUAC) aggregates software security metadata into a high fidelity graph database-normalizing entity identities and mapping standard relationships between them. Querying this graph can drive higher-level organizational outcomes such as audit, policy, risk management, and even developer assistance. Conceptually, GUAC occupies the “aggregation and synthesis” layer of the software supply chain transparency logical model: Tool Vulnerability Uber
GoogleSec.webp 2022-06-14 12:00:00 SBOM in Action: finding vulnerabilities with a Software Bill of Materials (lien direct)
Posted by Brandon Lum and Oliver Chang, Google Open Source Security Team

The past year has seen an industry-wide effort to embrace Software Bills of Materials (SBOMs)-a list of all the components, libraries, and modules that are required to build a piece of software. In the wake of the 2021 Executive Order on Cybersecurity, these ingredient labels for software became popular as a way to understand what's in the software we all consume. The guiding idea is that it's impossible to judge the risks of particular software without knowing all of its components-including those produced by others. This increased interest in SBOMs saw another boost after the National Institute of Standards and Technology (NIST) released its Secure Software Development Framework, which requires SBOM information to be available for software. But now that the industry is making progress on methods to generate and share SBOMs, what do we do with them?

Generating an SBOM is only one half of the story. Once an SBOM is available for a given piece of software, it needs to be mapped onto a list of known vulnerabilities to know which components could pose a threat. By connecting these two sources of information, consumers will know not just what's in their software, but also its risks and whether they need to remediate any issues.

In this blog post, we demonstrate the process of taking an SBOM from a large and critical project-Kubernetes-and using an open source tool to identify the vulnerabilities it contains. Our example's success shows that we don't need to wait for SBOM generation to reach full maturity before we begin mapping SBOMs to common vulnerability databases. With just a few updates from SBOM creators to address current limitations in connecting the two sources of data, this process is poised to become easily within reach of the average software consumer.
OSV: Connecting SBOMs to vulnerabilities

The following example uses Kubernetes, a major project that makes its SBOM available using the Software Package Data Exchange (SPDX) format-an international open standard (ISO) for communicating SBOM information. The same idea should apply to any project that makes its SBOM available, and for projects that don't, you can generate your own SBOM using the same bom tool Kubernetes created.

We have chosen to map the SBOM to the Open Source Vulnerabilities (OSV) database, which describes vulnerabilities in a format that was specifically designed to map to open source package versions or commit hashes. The OSV database excels here as it provides a standardized format and aggregates information across multiple ecosystems (e.g., Python, Golang, Rust) and databases (e.g., Github Advisory Database (GHSA), Global Security Database (GSD)).

To connect the SBOM to the database, we'll use the SPDX spdx-to-osv tool. This open source tool takes in an SPDX SBOM document, queries the OSV database of vulnerabilities, and returns an enumeration of vulnerabilities present in the software's declared components.

Example: Kubernetes' SBOM

The first step is to download Kubernetes' SBOM, which is publicly available and contains information on the project, dependencies, versions, and Tool Vulnerability Uber
GoogleSec.webp 2022-05-26 13:53:04 Retrofitting Temporal Memory Safety on C++ (lien direct)
Posted by Anton Bikineev, Michael Lippautz and Hannes Payer, Chrome security team

Memory safety in Chrome is an ever-ongoing effort to protect our users. We are constantly experimenting with different technologies to stay ahead of malicious actors. In this spirit, this post is about our journey of using heap scanning technologies to improve memory safety of C++. Let's start at the beginning though. Throughout the lifetime of an application its state is generally represented in memory. Temporal memory safety refers to the problem of guaranteeing that memory is always accessed with the most up to date information of its structure, its type. C++ unfortunately does not provide such guarantees. While there is appetite for different languages than C++ with stronger memory safety guarantees, large codebases such as Chromium will use C++ for the foreseeable future.

auto* foo = new Foo();
delete foo;
// Any later dereference of foo is a use-after-free: the memory no
// longer represents a Foo object.

Tool Guideline
GoogleSec.webp 2022-05-18 09:03:33 Privileged pod escalations in Kubernetes and GKE (lien direct)
Posted by GKE and Anthos Platform Security Teams

At the KubeCon EU 2022 conference in Valencia, security researchers from Palo Alto Networks presented research findings on “trampoline pods”-pods with an elevated set of privileges required to do their job, but that could conceivably be used as a jumping off point to gain escalated privileges.

The research mentions GKE, including how developers should look at the privileged pod problem today, what the GKE team is doing to minimize the use of privileged pods, and actions GKE users can take to protect their clusters.

Privileged pods within the context of GKE security

While privileged pods can pose a security issue, it's important to look at them within the overall context of GKE security. To use a privileged pod as a “trampoline” in GKE, there is a major prerequisite – the attacker has to first execute a successful application compromise and container breakout attack. Because the use of privileged pods in an attack requires a first step such as a container breakout to be effective, let's look at two areas:

- features of GKE you can use to reduce the likelihood of a container breakout
- steps the GKE team is taking to minimize the use of privileged pods and the privileges needed in them

Reducing container breakouts

There are a number of features in GKE along with some best practices that you can use to reduce the likelihood of a container breakout:

- Use GKE Sandbox to strengthen the container security boundary. Over the last few months, GKE Sandbox has protected containers running it against several newly discovered Linux kernel breakout CVEs.
- Adopt GKE Autopilot for new clusters. Autopilot clusters have default policies that prevent host access through mechanisms like host path volumes and host network.
The container runtime default seccomp profile is also enabled by default on Autopilot, which has prevented several breakouts.

- Subscribe to GKE Release Channels and use autoupgrade to keep nodes patched automatically against kernel vulnerabilities.
- Run Google's Container-Optimized OS, the minimal and hardened container-optimized OS that makes much of the disk read-only.
- Incorporate binary authorization into your SDLC to require that containers admitted into the cluster are from trusted build systems and up-to-date on patching.
- Use Security Command Center's Container Threat Detection or supported third-party tools to detect the most common runtime attacks.

More information can be found in the GKE Hardening Guide.

How GKE is reducing the use of privileged pods Tool Threat Uber
GoogleSec.webp 2022-04-13 12:35:03 How to SLSA Part 2 - The Details (lien direct)
Posted by Tom Hennen, software engineer, BCID & GOSST

In our last post we introduced a fictional example of Squirrel, Oppy, and Acme learning to use SLSA and covered the basics of what their implementations might look like. Today we'll cover the details: where to store attestations and policies, what policies should check, and how to handle key distribution and trust.

Attestation storage

Attestations play a large role in SLSA and it's essential that consumers of artifacts know where to find the attestations for those artifacts.

Co-located in repo

Attestations could be colocated in the repository that hosts the artifact. This is how Squirrel plans to store attestations for packages. They even want to add support to the Squirrel CLI (e.g. acorn get-attestations foo@1.2.3). Acme really likes this approach because the attestations are always available and it doesn't introduce any new dependencies.

Rekor

Meanwhile, Oppy plans to store attestations in Rekor. They like being able to direct users to an existing public instance while not having to maintain any new infrastructure themselves, and the in-depth defense the transparency log provides against tampering with the attestations. Though the latency of querying attestations from Rekor is likely too high for doing verification at time of use, Oppy isn't too concerned since they expect users to query Rekor at install time.

Hybrid

A hybrid model is also available where the publisher stores the attestations in Rekor as well as co-located with the artifact in the repo-along with Rekor's inclusion proof. This provides confidence the data was added to Rekor while providing the benefits of co-locating attestations in the repository.

Policy content

'Policy' refers to the rules used to determine if an artifact should be allowed for a use case. Policies often use the package name as a proxy for determining the use case.
An example being, if you want to find the policy to apply you could look up the policy using the package name of the artifact you're evaluating. Policy specifics may vary based on ease of use, availability of data, risk tolerance and more. Full verification needs more from policies than delegated verification does.

Default policy

Default policies allow admission decisions without the need to create specific policies for each package. A default policy is a way of saying “anything that doesn't have a more specific policy must comply with this policy”. Squirrel plans to eventually implement a default policy of “any package without a more specific policy will be accepted as long as it meets SLSA 3”, but they recognize that most packages don't support this yet. Until they achieve critical mass they'll have a default SLSA 0 policy (all artifacts are accepted). While Oppy is leaving verification to their users, they'll suggest a default policy of “any package built by 'https://oppy.example/slsa/builder/v1'”.

Specific policy

Squirrel also plans to allow users to create policies for specific packages. For example, this policy requires that package 'foo' must have been built by GitHub Actions, from github.com/foo/acorn-foo, and be SLSA 4. Tool
GoogleSec.webp 2022-01-19 10:00:00 Reducing Security Risks in Open Source Software at Scale: Scorecards Launches V4 (lien direct)
Posted by Laurent Simon and Azeem Shaikh, Google Open Source Security Team (GOSST)

Since our July announcement of Scorecards V2, the Scorecards project-an automated security tool to flag risky supply chain practices in open source projects-has grown steadily to over 40 unique contributors and 18 implemented security checks. Today we are proud to announce the V4 release of Scorecards, with larger scaling, a new security check, and a new Scorecards GitHub Action for easier security automation.

The Scorecards Action is released in partnership with GitHub and is available from GitHub's Marketplace. The Action makes using Scorecards easier than ever: it runs automatically on repository changes to alert developers about risky supply-chain practices. Maintainers can view the alerts on GitHub's code scanning dashboard, which is available for free to public repositories on GitHub.com and via GitHub Advanced Security for private repositories. Additionally, we have scaled our weekly Scorecards scans to over one million GitHub repositories, and have partnered with the Open Source Insights website for easy user access to the data. For more details about the release, including the new Dangerous-Workflow security check, visit the OpenSSF's official blog post here. Tool
GoogleSec.webp 2021-10-28 13:00:00 Protecting your device information with Private Set Membership (direct link) Posted by Kevin Yeo and Sarvar Patel, Private Computing Team At Google, keeping you safe online is our top priority, so we continuously build the most advanced privacy-preserving technologies into our products. Over the past few years, we've utilized innovations in cryptographic research to keep your personal information private by design and secure by default. As part of this, we launched Password Checkup, which protects account credentials by notifying you if an entered username and password are known to have been compromised in a prior data breach. Using cryptographic techniques, Password Checkup can do this without revealing your credentials to anyone, including Google. Today, Password Checkup protects users across many platforms including Android, Chrome and Google Password Manager.

Another example is Private Join and Compute, an open source protocol which enables organizations to work together and draw insights from confidential data sets. Two parties are able to encrypt their data sets, join them, and compute statistics over the joint data. By leveraging secure multi-party computation, Private Join and Compute is designed to ensure that the plaintext data sets are concealed from all parties.

In this post, we introduce the next iteration of our research, Private Set Membership, as well as its open-source availability. At a high level, Private Set Membership considers the scenario in which Google holds a database of items, and user devices need to contact Google to check whether a specific item is found in the database. As an example, users may want to check membership of a computer program on a block list consisting of known malicious software before executing the program. Often, the set's contents and the queried items are sensitive, so we designed Private Set Membership to perform this task while preserving the privacy of our users.
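To make the set-membership scenario concrete, here is a deliberately simplified Python sketch in the spirit of hash-prefix bucketing (a weaker, k-anonymity-style technique, not Google's actual Private Set Membership protocol, which relies on homomorphic encryption). The client reveals only a short hash prefix to the server; the final comparison happens locally on the device. All names and the breached-item list are illustrative assumptions.

```python
import hashlib

def h(item: bytes) -> str:
    """Hash an item to a hex digest."""
    return hashlib.sha256(item).hexdigest()

# Server side: bucket known-bad items by the first 2 hex chars of their hash.
BREACHED = [b"password123", b"letmein", b"hunter2"]
BUCKETS: dict = {}
for _item in BREACHED:
    _digest = h(_item)
    BUCKETS.setdefault(_digest[:2], set()).add(_digest)

def server_lookup(prefix: str) -> set:
    """The server sees only a short prefix, shared by many possible items."""
    return BUCKETS.get(prefix, set())

def client_check(item: bytes) -> bool:
    """Client-side membership check: only the prefix leaves the device."""
    digest = h(item)
    candidates = server_lookup(digest[:2])
    return digest in candidates  # final comparison stays local
```

Unlike this sketch, the real protocol described in the post keeps both the queried item and the set contents fully hidden; the sketch only shows why the membership check can avoid sending the raw item to the server.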
Protecting your device information during enrollment
Beginning in Chrome 94, Private Set Membership will enable Chrome OS devices to complete the enrollment process in a privacy-preserving manner. Device enrollment is an integral part of the out-of-box experience that welcomes you when getting started with a Chrome OS device. The device enrollment process requires checking membership of device information in encrypted Google databases, including checking if a device is enterprise enrolled or determining if a device was pre-packaged with a license. The correct end state of your Chrome OS device is determined using the results of these membership checks.

During the enrollment process, we protect your Chrome OS devices by ensuring no information ever leaves the device that may be decrypted by anyone else when using Private Set Membership. Google will never learn any device information and devices will not learn any unnecessary information about other devices. To our knowledge, this is the first instance of advanced cryptographic tools being leveraged to protect device information during the enrollment process.

A deeper look at Private Set Membership
Private Set Membership is built upon two cryptographic tools: Homomorphic encryption is a powerful cryptographic tool that enables computation over encrypted data without the need for decryption. Tool
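The homomorphic property mentioned above (computing on data while it stays encrypted) can be demonstrated with textbook unpadded RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This toy is only an illustration of the property; it is not the scheme Private Set Membership uses, and unpadded RSA with tiny parameters must never be used in practice.

```python
# Toy textbook-RSA parameters (far too small for real use).
p, q = 61, 53
n = p * q        # 3233
e = 17           # public exponent
d = 2753         # private exponent: inverse of e mod (p-1)*(q-1)

def enc(m: int) -> int:
    """Unpadded RSA encryption: c = m^e mod n."""
    return pow(m, e, n)

def dec(c: int) -> int:
    """Unpadded RSA decryption: m = c^d mod n."""
    return pow(c, d, n)

# Multiplicative homomorphism: the server can multiply ciphertexts
# without decrypting, and the result decrypts to the product.
a, b = 4, 5
product_ct = (enc(a) * enc(b)) % n
assert dec(product_ct) == (a * b) % n
```

Schemes used for protocols like Private Set Membership support richer operations over encrypted data, but the core idea is the same: the party doing the computation never sees the plaintext.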
Last update at: 2024-05-16 17:08:31