What's new around the internet


Src  Date (GMT)  Title  Description  Tags  Stories  Notes
GoogleSec.webp 2022-08-30 07:15:00 Announcing Google's Open Source Software Vulnerability Rewards Program (direct link) Posted by Francis Perron, Open Source Security Technical Program Manager. Today, we are launching Google's Open Source Software Vulnerability Rewards Program (OSS VRP) to reward discoveries of vulnerabilities in Google's open source projects. As the maintainer of major projects such as Golang, Angular, and Fuchsia, Google is among the largest contributors and users of open source in the world. With the addition of Google's OSS VRP to our family of Vulnerability Reward Programs (VRPs), researchers can now be rewarded for finding bugs that could potentially impact the entire open source ecosystem. Google has been committed to supporting security researchers and bug hunters for over a decade. The original VRP program, established to compensate and thank those who help make Google's code more secure, was one of the first in the world and is now approaching its 12th anniversary. Over time, our VRP lineup has expanded to include programs focused on Chrome, Android, and other areas. Collectively, these programs have rewarded more… Tags: Vulnerability, Guideline
GoogleSec.webp 2022-08-24 13:14:56 Announcing the Open Sourcing of Paranoid's Library (direct link) Posted by Pedro Barbosa, Security Engineer, and Daniel Bleichenbacher, Software Engineer. Paranoid is a project to detect well-known weaknesses in large amounts of crypto artifacts, like public keys and digital signatures. On August 3rd, 2022, we open sourced the library containing the checks that we implemented so far (https://github.com/google/paranoid_crypto). The library is developed and maintained by members of the Google Security Team, but it is not an officially supported Google product. Why the Project? Crypto artifacts may be generated by systems with implementations unknown to us; we refer to them as “black boxes.” An artifact may be generated by a black box if, for example, it was not generated by one of our own tools (such as Tink), or by a library that we can inspect and test using Wycheproof. Unfortunately, sometimes we end up relying on black-box generated artifacts (e.g. those generated by proprietary HSMs). After the disclosure of the ROCA vulnerability… Tags: Vulnerability, Threat
GoogleSec.webp 2022-08-10 12:00:24 Making Linux Kernel Exploit Cooking Harder (direct link) Posted by Eduardo Vela, Exploit Critic. [Cover image: a medieval cookbook titled “Kernel Exploits,” adorned and featuring a small penguin; 15th century, color, detailed; private collection.] The Linux kernel is a key component for the security of the Internet. Google uses Linux in almost everything, from the computers our employees use, to the products people around the world use daily like Chromebooks, Android on phones, cars, and TVs, and workloads on Google Cloud. Because of this, we have heavily invested in Linux's security, and today we're announcing how we're building on those investments and increasing our rewards. In 2020, we launched an open-source Kubernetes-based Capture-the-Flag (CTF) project called kCTF. The kCTF Vulnerability Rewards Program (VRP) lets researchers connect to our Google Kubernetes Engine (GKE) instances, and if they can hack it, they get a flag, and are potentially rewarded. All of GKE and its dependencies… Tags: Hack, Uber
GoogleSec.webp 2022-08-08 11:59:54 How Hash-Based Safe Browsing Works in Google Chrome (direct link) By Rohit Bhatia, Mollie Bates, Google Chrome Security. There are various threats a user faces when browsing the web. Users may be tricked into sharing sensitive information like their passwords with a misleading or fake website, also called phishing. They may also be led into installing malicious software on their machines, called malware, which can collect personal data and also hold it for ransom. Google Chrome, henceforth called Chrome, enables its users to protect themselves from such threats on the internet. When Chrome users browse the web with Safe Browsing protections, Chrome uses the Safe Browsing service from Google to identify and ward off various threats. Safe Browsing works in different ways depending on the user's preferences. In the most common case, Chrome uses the privacy-conscious Update API (Application Programming Interface) from the Safe Browsing service. This API was developed with user privacy in mind and ensures Google gets as little information about the user's browsing history as possible. If the user has opted in to "Enhanced Protection" (covered in an earlier post) or "Make Searches and Browsing Better", Chrome shares limited additional data with Safe Browsing only to further improve user protection. This post describes how Chrome implements the Update API, with appropriate pointers to the technical implementation and details about the privacy-conscious aspects of the Update API. This should be useful for users to understand how Safe Browsing protects them, and for interested developers to browse through and understand the implementation. We will cover the APIs used for Enhanced Protection users in a future post. Threats on the Internet: When a user navigates to a webpage on the internet, their browser fetches objects hosted on the internet.
These objects include the structure of the webpage (HTML), the styling (CSS), dynamic behavior in the browser (JavaScript), images, downloads initiated by the navigation, and other webpages embedded in the main webpage. These objects, also called resources, have a web address which is called their URL (Uniform Resource Locator). Further, URLs may redirect to other URLs when being loaded. Each of these URLs can potentially host threats such as phishing websites, malware, unwanted downloads, malicious software, unfair billing practices, and more. Chrome with Safe Browsing checks all URLs, redirects, and included resources to identify such threats and protect users. Safe Browsing Lists: Safe Browsing provides a list for each threat it protects users against on the internet. A full catalog of lists that are used in Chrome can be found by visiting chrome://safe-browsing/#tab-db-manager on desktop platforms. A list does not contain unsafe web addresses, also referred to as URLs, in their entirety; it would be prohibitively expensive to keep all of them in a device's limited memory. Instead, it maps a URL, which can be very long, through a cryptographic hash function (SHA-256) to a unique fixed-size string. This distinct fixed-size string, called a hash, allows a list to be stored efficiently in limited memory. The Update API handles URLs only in the form of hashes and is also called the hash-based API in this post. Further, a list does not store hashes in their entirety either, as even that would be too memory-intensive. Instead, barring a case where data is not shared with Google and the list is small, it contains prefixes of the hashes. We refer to the original hash as a full hash, and… Tags: Malware, Threat, Guideline
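The hashing scheme described above can be sketched in a few lines of Python. This is an illustration of the general idea, not Chrome's actual implementation: real Safe Browsing first canonicalizes the URL into a set of host/path expressions, and the 4-byte prefix length used here is only the common case.

```python
import hashlib


def full_hash(expression: str) -> bytes:
    """SHA-256 digest of an (already canonicalized) URL expression."""
    return hashlib.sha256(expression.encode("utf-8")).digest()


def hash_prefix(expression: str, length: int = 4) -> bytes:
    """Fixed-size prefix of the full hash, as stored in the local list."""
    return full_hash(expression)[:length]


# The local list stores only prefixes. A prefix match is not proof that a
# URL is unsafe; it only tells the client to ask the server for the full
# hashes that share this prefix.
print(len(full_hash("example.com/")), len(hash_prefix("example.com/")))  # 32 4
```

Because many URLs can share a 4-byte prefix, a local match reveals little to Google about which exact URL was visited, which is the privacy property the post describes.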
GoogleSec.webp 2022-07-19 12:59:33 DNS-over-HTTP/3 in Android (direct link) Posted by Matthew Mauer and Mike Yu, Android team. To help keep Android users' DNS queries private, Android supports encrypted DNS. In addition to existing support for DNS-over-TLS, Android now supports DNS-over-HTTP/3, which has a number of improvements over DNS-over-TLS. Most network connections begin with a DNS lookup. While transport security may be applied to the connection itself, that DNS lookup has traditionally not been private by default: the base DNS protocol is raw UDP with no encryption. While the internet has migrated to TLS over time, DNS has a bootstrapping problem. Certificate verification relies on the domain of the other party, which requires either DNS itself, or moves the problem to DHCP (which may be maliciously controlled). This issue is mitigated by central resolvers like Google, Cloudflare, OpenDNS and Quad9, which allow devices to configure a single DNS resolver locally for every network, overriding what is offered through DHCP. In Android 9.0, we announced the Private DNS feature, which uses DNS-over-TLS (DoT) to protect DNS queries when enabled and supported by the server. Unfortunately, DoT incurs overhead for every DNS request. An alternative encrypted DNS protocol, DNS-over-HTTPS (DoH), is rapidly gaining traction within the industry, as DoH has already been deployed by most public DNS operators, including the Cloudflare Resolver and Google Public DNS. While using HTTPS alone will not reduce the overhead significantly, HTTP/3 uses QUIC, a transport that efficiently multiplexes multiple streams over UDP using a single TLS session with session resumption. All of these features are crucial to efficient operation on mobile devices. DNS-over-HTTP/3 (DoH3) support was released as part of a Google Play system update, so by the time you're reading this, Android devices from Android 11 onwards will use DoH3 instead of DoT for well-known DNS servers which support it.
Which DNS service you are using is unaffected by this change; only the transport will be upgraded. In the future, we aim to support DDR, which will allow us to dynamically select the correct configuration for any server. This feature should decrease the performance impact of encrypted DNS. Performance: DNS-over-HTTP/3 avoids several problems that can occur with DNS-over-TLS operation. As DoT operates on a single stream of requests and responses, many server implementations suffer from head-of-line blocking. This means that if the request at the front of the line takes a while to resolve (possibly because a recursive resolution is necessary), responses for subsequent requests that would have otherwise been resolved quickly are blocked waiting on that first request. DoH3, by comparison, runs each request over a separate logical stream, which means… Tags: Studies
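To see why unencrypted DNS leaks the query, it helps to look at the wire format. The sketch below hand-builds a minimal DNS query message (the same bytes that go out in a raw UDP datagram under plain DNS); the hostname travels in cleartext, which is exactly what DoT and DoH3 encrypt. Illustrative only; real clients use a DNS library.

```python
import struct


def encode_qname(name: str) -> bytes:
    """DNS name encoding: length-prefixed labels ending with a zero byte."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"


def build_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    """Minimal DNS query: 12-byte header + one question (type A, class IN)."""
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD=1, QDCOUNT=1
    question = encode_qname(name) + struct.pack("!HH", qtype, 1)
    return header + question


pkt = build_query("example.com")
# Anyone on the path can read the queried hostname out of the datagram:
print(b"example" in pkt)  # True
```

With DoT or DoH3, this same message is carried inside a TLS (or QUIC) session, so on-path observers see only the resolver's address, not the name being resolved.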
GoogleSec.webp 2022-07-12 13:11:21 TAG Bulletin: Q2 2022 (direct link) Posted by Shane Huntley, Director, Threat Analysis Group. This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q2 2022. It was last updated on June 30, 2022. May: We terminated 20 YouTube channels as part of our investigation into coordinated influence operations linked to Russia. The campaign was linked to a Russian consulting firm and was sharing content in Russian that was supportive of Russia's actions in Ukraine and Russian President Vladimir Putin and critical of NATO, Ukraine, and Ukrainian President Volodymyr Zelenskyy. We terminated 5 YouTube channels, 1 AdSense account, and 1 Blogger blog as part of our investigation into coordinated influence operations linked to Russia. The campaign was linked to Russian state-sponsored entities and was sharing content in Russian and Bulgarian that was supportive of separatist movements in the disputed regions of Ukraine and critical of Ukraine and the West. We terminated 14 YouTube channels as part of our investigation into coordinated influence operations linked to Russia. The campaign was linked to the Internet Research Agency (IRA) and was sharing content in Russian that was critical of Ukrainian politicians. We blocked 1 domain from eligibility to appear on Google News surfaces and Discover as part of our investigation into coordinated influence operations linked to Iran. The campaign was linked to Endless Mayfly and was sharing content in English about a variety of topics including US and global current events. We received leads from Mandiant that supported us in this investigation. Tags: Threat, Guideline
GoogleSec.webp 2022-06-21 12:00:00 Game on! The 2022 Google CTF is here. (direct link) Posted by Jan Keller, Technical Entertainment Manager, Bug Hunters. Are you ready to put your hacking skills to the test? It's Google CTF time! The competition kicks off on July 1, 2022, 6:00 PM UTC and runs through July 3, 2022, 6:00 PM UTC. Registration is now open at http://goo.gle/ctf. In true old Google CTF fashion, the top 8 teams will qualify for our Hackceler8 speedrunning-meets-CTFs competition. The prize pool stands similar to previous years at more than $40,000. We can't wait to see whether PPP will be able to defend their crown. For those of you looking to satisfy your late-night hacking hunger: past years' challenges, including Hackceler8 2021 matches, are open-sourced here. On top of that, there are hours of Hackceler8 2020 videos to watch! If you are just starting out in this space, last year's Beginner's Quest is a great resource to get started. For later in the year, we have something mysterious planned - stay tuned to find out more! Whether you're a seasoned CTF player or just curious about cyber security and ethical hacking, we want you to join us. Sign up to expand your skill set, meet new friends in the security community, and even watch the pros in action. For the latest announcements, see g.co/ctf, subscribe to our mailing list, or follow us on @GoogleVRP. Interested in bug hunting for Google? Check out bughunters.google.com. See you there!
GoogleSec.webp 2022-06-14 12:00:00 SBOM in Action: finding vulnerabilities with a Software Bill of Materials (direct link) Posted by Brandon Lum and Oliver Chang, Google Open Source Security Team. The past year has seen an industry-wide effort to embrace Software Bills of Materials (SBOMs): a list of all the components, libraries, and modules that are required to build a piece of software. In the wake of the 2021 Executive Order on Cybersecurity, these ingredient labels for software became popular as a way to understand what's in the software we all consume. The guiding idea is that it's impossible to judge the risks of particular software without knowing all of its components, including those produced by others. This increased interest in SBOMs saw another boost after the National Institute of Standards and Technology (NIST) released its Secure Software Development Framework, which requires SBOM information to be available for software. But now that the industry is making progress on methods to generate and share SBOMs, what do we do with them? Generating an SBOM is only one half of the story. Once an SBOM is available for a given piece of software, it needs to be mapped onto a list of known vulnerabilities to know which components could pose a threat. By connecting these two sources of information, consumers will know not just what's in their software, but also its risks and whether they need to remediate any issues. In this blog post, we demonstrate the process of taking an SBOM from a large and critical project, Kubernetes, and using an open source tool to identify the vulnerabilities it contains. Our example's success shows that we don't need to wait for SBOM generation to reach full maturity before we begin mapping SBOMs to common vulnerability databases. With just a few updates from SBOM creators to address current limitations in connecting the two sources of data, this process is poised to become easily within reach of the average software consumer.
OSV: Connecting SBOMs to vulnerabilities. The following example uses Kubernetes, a major project that makes its SBOM available using the Software Package Data Exchange (SPDX) format, an international open standard (ISO) for communicating SBOM information. The same idea should apply to any project that makes its SBOM available, and for projects that don't, you can generate your own SBOM using the same bom tool Kubernetes created. We have chosen to map the SBOM to the Open Source Vulnerabilities (OSV) database, which describes vulnerabilities in a format that was specifically designed to map to open source package versions or commit hashes. The OSV database excels here as it provides a standardized format and aggregates information across multiple ecosystems (e.g., Python, Golang, Rust) and databases (e.g., GitHub Advisory Database (GHSA), Global Security Database (GSD)). To connect the SBOM to the database, we'll use the SPDX spdx-to-osv tool. This open source tool takes in an SPDX SBOM document, queries the OSV database of vulnerabilities, and returns an enumeration of vulnerabilities present in the software's declared components. Example: Kubernetes' SBOM. The first step is to download Kubernetes' SBOM, which is publicly available and contains information on the project, dependencies, versions, and… Tags: Tool, Vulnerability, Uber
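Under the hood, mapping an SBOM entry to OSV amounts to one small JSON query per component. The sketch below builds the request body that OSV's public `/v1/query` endpoint expects for a package-version pair (the package shown is just an example, not taken from the Kubernetes SBOM); the network call is left out so the sketch stays self-contained.

```python
import json


def osv_query(name: str, version: str, ecosystem: str) -> dict:
    """Request body for POST https://api.osv.dev/v1/query."""
    return {
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }


# Tools like spdx-to-osv send one such query per SBOM component; a
# non-empty "vulns" list in the response means the pinned version has
# known advisories.
body = osv_query("golang.org/x/text", "0.3.5", "Go")
print(json.dumps(body))
```

This is also why the post stresses accurate version information in SBOMs: the query is only as good as the name/version pair the SBOM declares.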
GoogleSec.webp 2022-06-03 15:04:57 Announcing the winners of the 2021 GCP VRP Prize (direct link) Posted by Harshvardhan Sharma, Information Security Engineer, Google. 2021 was another record-breaking year for our Vulnerability Rewards Program (VRP). We paid a total of $8.7 million in rewards, our highest amount yet. 2021 saw some amazing work from the security research community. It is worth noting that a significant portion of the reports we received were for findings in Google Cloud Platform (GCP) products. It is heartening to see an increasing number of talented researchers getting involved in cloud security. We first announced the GCP VRP Prize in 2019 to encourage security researchers to focus on the security of GCP, in turn helping us make GCP more secure for our users, customers, and the internet at large. Even 3 years into the program, the submissions we are getting never cease to amaze us. After careful evaluation of the submissions, we are excited to announce the 2021 winners: First Prize, $133,337: Sebastian Lutz for the report and write-up Bypassing Identity-Aware Proxy. Sebastian's excellent write-up outlines how he found a bug in Identity-Aware Proxy (IAP) which an attacker could have exploited to gain access to a user's IAP-protected resources by making them visit an attacker-controlled URL and stealing their IAP auth token. Second Prize, $73,331: Imre Rad for the report and write-up GCE VM takeover via DHCP flood. The flaw described in the write-up would have allowed an attacker to gain access to a Google Compute Engine VM by sending malicious DHCP packets to the VM and impersonating the GCE metadata server. Third Prize, $73,331: Mike Brancato for the report and write-up Remote Code Execution in Google Cloud Dataflow.
Mike's write-up describes how he discovered that Dataflow nodes were exposing an unauthenticated Java JMX port and how an attacker could have exploited this to run arbitrary commands on the VM under some configurations. Fourth Prize, $31,337: Imre Rad for the write-up The Speckle Umbrella story - part 2, which details multiple vulnerabilities that Imre found in Cloud SQL. (Remember, you can make multiple submissions for the GCP VRP Prize and be eligible for more than one prize!) Fifth Prize, $1,001: Anthony Weems for the report and write-up Remote code execution in Managed Anthos Service Mesh control plane. Anthony found a bug in Managed Anthos Service Mesh and came up with a clever exploit to execute arbitrary commands authenticated as a Google-managed per-project service account. Sixth Prize, $1,000: Ademar Nowasky Junior for the report and write-up Command Injection in Google Cloud Shell. Ademar found a way to bypass some of the validation checks done by Cloud Shell. This would have allowed an attacker to run arbitrary commands in a user's Cloud Shell session by making them visit a maliciously crafted link. Congratulations to all the winners! Here's a video with more details about each of the winning submissions. New Details About the 2022 GCP VRP: We will pay out a total of $313,337 to the top seven submissions in the 2022 edition of the GCP VRP Prize. Individual prize amounts… Tags: Vulnerability
GoogleSec.webp 2022-05-26 13:53:04 Retrofitting Temporal Memory Safety on C++ (direct link) Posted by Anton Bikineev, Michael Lippautz and Hannes Payer, Chrome security team. Memory safety in Chrome is an ever-ongoing effort to protect our users. We are constantly experimenting with different technologies to stay ahead of malicious actors. In this spirit, this post is about our journey of using heap scanning technologies to improve the memory safety of C++. Let's start at the beginning though. Throughout the lifetime of an application, its state is generally represented in memory. Temporal memory safety refers to the problem of guaranteeing that memory is always accessed with the most up-to-date information of its structure, its type. C++ unfortunately does not provide such guarantees. While there is appetite for different languages than C++ with stronger memory safety guarantees, large codebases such as Chromium will use C++ for the foreseeable future. auto* foo = new Foo(); delete foo; … Tags: Tool, Guideline
GoogleSec.webp 2022-05-18 09:03:33 Privileged pod escalations in Kubernetes and GKE (direct link) Posted by GKE and Anthos Platform Security Teams. At the KubeCon EU 2022 conference in Valencia, security researchers from Palo Alto Networks presented research findings on “trampoline pods”: pods with an elevated set of privileges required to do their job, but that could conceivably be used as a jumping-off point to gain escalated privileges. The research mentions GKE, including how developers should look at the privileged pod problem today, what the GKE team is doing to minimize the use of privileged pods, and actions GKE users can take to protect their clusters. Privileged pods within the context of GKE security: While privileged pods can pose a security issue, it's important to look at them within the overall context of GKE security. To use a privileged pod as a “trampoline” in GKE, there is a major prerequisite: the attacker has to first execute a successful application compromise and container breakout attack. Because the use of privileged pods in an attack requires a first step such as a container breakout to be effective, let's look at two areas: features of GKE you can use to reduce the likelihood of a container breakout, and steps the GKE team is taking to minimize the use of privileged pods and the privileges needed in them. Reducing container breakouts: There are a number of features in GKE, along with some best practices, that you can use to reduce the likelihood of a container breakout. Use GKE Sandbox to strengthen the container security boundary; over the last few months, GKE Sandbox has protected containers running it against several newly discovered Linux kernel breakout CVEs. Adopt GKE Autopilot for new clusters; Autopilot clusters have default policies that prevent host access through mechanisms like host path volumes and host network, and the container runtime default seccomp profile is also enabled by default on Autopilot, which has prevented several breakouts. Subscribe to GKE Release Channels and use autoupgrade to keep nodes patched automatically against kernel vulnerabilities. Run Google's Container-Optimized OS, the minimal and hardened container-optimized OS that makes much of the disk read-only. Incorporate binary authorization into your SDLC to require that containers admitted into the cluster are from trusted build systems and up-to-date on patching. Use Security Command Center's Container Threat Detection or supported third-party tools to detect the most common runtime attacks. More information can be found in the GKE Hardening Guide. How GKE is reducing the use of privileged pods… Tags: Tool, Threat, Uber
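Several of the recommendations above translate directly into a pod spec. The snippet below is a minimal illustration (the pod name and image are placeholders) of a non-privileged container with the runtime-default seccomp profile, privilege escalation disabled, and all capabilities dropped; it is a sketch, not a complete hardening policy.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example        # placeholder name
spec:
  containers:
  - name: app
    image: gcr.io/my-project/app:latest   # placeholder image
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault    # the default on Autopilot clusters
```

A pod like this cannot serve as a "trampoline" even if the application inside it is compromised, because it holds no privileges worth stealing.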
GoogleSec.webp 2022-05-11 15:49:52 I/O 2022: Android 13 security and privacy (and more!) (direct link) Posted by Eugene Liderman and Sara N-Marandi, Android Security and Privacy Team. Every year at I/O we share the latest on privacy and security features on Android. But we know some users like to go a level deeper in understanding how we're making the latest release safer and more private, while continuing to offer a seamless experience. So let's dig into the tools we're building to better secure your data, enhance your privacy and increase trust in the apps and experiences on your devices. Low latency, frictionless security: Regardless of whether a smartphone is used for consumer or enterprise purposes, attestation is a key underpinning to ensure the integrity of the device and apps running on the device. Fundamentally, key attestation lets a developer bind a secret or designate data to a device. This is a strong assertion: "same user, same device." As long as the key is available, a cryptographic assertion of integrity can be made. With Android 13 we have migrated to a new model for the provisioning of attestation keys to Android devices, known as Remote Key Provisioning (RKP). This new approach will strengthen device security by eliminating factory provisioning errors and providing key vulnerability recovery by moving to an architecture where Google takes more responsibility in the certificate management lifecycle for these attestation keys. You can learn more about RKP here. We're also making even more modules updatable directly through Google Play System Updates so we can automatically upgrade more system components and fix bugs, seamlessly, without you having to worry about it. We now have more than 30 components in Android that can be automatically updated through Google Play, including new modules in Android 13 for Bluetooth and ultra-wideband (UWB).
Last year we talked about how the majority of vulnerabilities in major operating systems are caused by undefined behavior in programming languages like C/C++. Rust is an alternative language that provides the efficiency and flexibility required in advanced systems programming (OS, networking), but Rust comes with the added boost of memory safety. We are happy to report that Rust is being adopted in security-critical parts of Android, such as our key management components and networking stacks. Hardening the platform doesn't just stop with continual improvements in memory safety and expansion of anti-exploitation techniques. It also includes hardening our API surfaces to provide a more secure experience to our end users. In Android 13 we implemented numerous enhancements to help mitigate potential vulnerabilities that app developers may inadvertently introduce. This includes making runtime receivers safer by allowing developers to specify whether a particular broadcast receiver in their app s… Tags: Spam, Vulnerability
GoogleSec.webp 2022-05-11 15:36:41 Taking on the Next Generation of Phishing Scams (direct link) Posted by Daniel Margolis, Software Engineer, Google Account Security Team. Every year, security technologies improve: browsers get better, encryption becomes ubiquitous on the Web, authentication becomes stronger. But phishing persistently remains a threat (as shown by a recent phishing attack on the U.S. Department of Labor) because users retain the ability to log into their online accounts, often with a simple password, from anywhere in the world. It's why today at I/O we announced new ways we're reducing the risks of phishing: scaling phishing protections to Google Docs, Sheets and Slides, continuing to auto-enroll people in 2-Step Verification, and more. This blog will deep-dive into the method of phishing and how it has evolved today. As phishing adoption has grown, multi-factor authentication has become a particular focus for attackers. In some cases, attackers phish SMS codes directly, by following a legitimate "one-time passcode" (triggered by the attacker trying to log into the victim's account) with a spoofed message asking the victim to "reply back with the code you just received." Left: legitimate Google SMS verification. Right: spoofed message asking the victim to share a verification code. In other cases, attackers have leveraged more sophisticated dynamic phishing pages to conduct relay attacks. In these attacks, a user thinks they're logging into the intended site, just as in a standard phishing attack.
But instead of deploying a simple static phishing page that saves the victim's email and password when the victim tries to log in, the phisher has deployed a web service that logs into the actual website at the same time the user is falling for the phishing page. The simplest approach is an almost off-the-shelf "reverse proxy" which acts as a "person in the middle", forwarding the victim's inputs to the legitimate page and sending the response from the legitimate page back to the victim's browser. Tags: Threat
GoogleSec.webp 2022-04-28 12:21:24 The Package Analysis Project: Scalable detection of malicious open source packages (direct link) Posted by Caleb Brown, Open Source Security Team. Despite open source software's essential role in all software built today, it's far too easy for bad actors to circulate malicious packages that attack the systems and users running that software. Unlike mobile app stores that can scan for and reject malicious contributions, package repositories have limited resources to review the thousands of daily updates and must maintain an open model where anyone can freely contribute. As a result, malicious packages like ua-parser-js and node-ipc are regularly uploaded to popular repositories despite their best efforts, with sometimes devastating consequences for users. Google, a member of the Open Source Security Foundation (OpenSSF), is proud to support the OpenSSF's Package Analysis project, which is a welcome step toward helping secure the open source packages we all depend on. The Package Analysis program performs dynamic analysis of all packages uploaded to popular open source repositories and catalogs the results in a BigQuery table. By detecting malicious activities and alerting consumers to suspicious behavior before they select packages, this program contributes to a more secure software supply chain and greater trust in open source software. The program also gives insight into the types of malicious packages that are most common at any given time, which can guide decisions about how to better protect the ecosystem. To better understand how the Package Analysis program is contributing to supply chain security, we analyzed the nearly 200 malicious packages it captured over a one-month period. Here's what we discovered: Results: All signals collected are published in our BigQuery table. Using simple queries on this table, we found around 200 meaningful results from the packages uploaded to NPM and PyPI in a period of just over a month.
Here are some notable examples, with more available in the repository. PyPI: discordcmd. This Python package will attack the desktop client for Discord on Windows. It was found by spotting the unusual requests to raw.githubusercontent.com, the Discord API, and ipinfo.io. First, it downloaded a backdoor from GitHub and installed it into the Discord electron client. Next, it looked through various local databases for the user's Discord token.
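The detection signal described here, unusual outbound requests observed during dynamic analysis, can be sketched as a simple allowlist check over the domains a package contacts while installing. The domain sets below are illustrative placeholders, not the project's actual rules or pipeline.

```python
# Domains a typical PyPI package install would be expected to contact
# (illustrative allowlist, not the Package Analysis project's real one).
EXPECTED = {"pypi.org", "files.pythonhosted.org"}


def suspicious_domains(observed: set) -> set:
    """Flag any contacted domain outside the expected set for review."""
    return observed - EXPECTED


hits = suspicious_domains(
    {"files.pythonhosted.org", "raw.githubusercontent.com", "ipinfo.io"}
)
print(sorted(hits))  # ['ipinfo.io', 'raw.githubusercontent.com']
```

In the discordcmd case, exactly this kind of out-of-place traffic (a code host and an IP-lookup service during an install) was the tell.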
GoogleSec.webp 2022-04-27 12:01:06 How we fought bad apps and developers in 2021 (direct link) Posted by Steve Kafka and Khawaja Shams, Android Security and Privacy Team. Providing a safe experience to billions of users continues to be one of the highest priorities for Google Play. Last year we introduced multiple privacy-focused features, enhanced our protections against bad apps and developers, and improved SDK data safety. In addition, Google Play Protect continues to scan billions of installed apps each day across billions of devices to keep people safe from malware and unwanted software. We continue to enhance our machine learning systems and review processes, and in 2021 we blocked 1.2 million policy-violating apps from being published on Google Play, preventing billions of harmful installations. We also continued in our efforts to combat malicious and spammy developers, banning 190k bad accounts in 2021. In addition, we have closed around 500k developer accounts that are inactive or abandoned. In May we announced our new Data safety section for Google Play, where developers will be required to give users deeper insight into the privacy and security practices of the apps they download, and provide transparency into the data the app may collect and why. The Data safety section launched this week, and developers are required to complete this section for their apps by July 20th. We've also invested in making life easier for our developers. We added the Policy and Programs section to Google Play Console to help developers manage all their app compliance issues in one central location. This includes the ability to appeal a decision and track its status from this page. In addition, we continued to partner with SDK developers to improve app safety, limit how user data is shared, and improve lines of communication with app developers. SDKs provide functionality for app developers, but it can sometimes be tricky to know when an SDK is safe to use.
Last year, we engaged with SDK developers to build a safer Android and Google Play ecosystem. As a result of this work, SDK developers have improved the safety of SDKs used by hundreds of thousands of apps impacting billions of users. This remains a huge investment area for our team, and we will continue in our efforts to make SDKs safer across the ecosystem. Limiting accessThe best way to ensure users' data stays safe is to limit access to it in the first place. As a result of new platform protections and policies, developer collaboration and education, 98% of apps migrating to Android 11 or higher have reduced their access to sensitive APIs and user data. We've also significantly reduced the unnecessary, dangerous, or disallowed use of Accessibility APIs in apps migrating to Android 12, while preserving the functionality of legitimate use cases. We also continued in our commitment to make Android a great place for families. Last year we disallowed the collection of Advertising ID (AAID) and other device identifiers from all users in apps solely targeting children, and gave all users the ability to delete their Advertising ID entirely, regardless of the app. Pixel enhancementsFor Pi Malware
GoogleSec.webp 2022-04-14 13:34:54 How to SLSA Part 3 - Putting it all together (lien direct) Posted by Tom Hennen, software engineer, BCID & GOSST In our last two posts (1,2) we introduced a fictional example of Squirrel, Oppy, and Acme learning to SLSA and covered the basics and details of how they'd use SLSA for their organizations. Today we'll close out the series by exploring how each organization pulls together the various solutions into a heterogeneous supply chain. As a reminder, Acme is trying to produce a container image that contains three artifacts: the Squirrel package 'foo', the Oppy package 'baz', and a custom executable, 'bar', written by Acme employees. The process starts with 'foo' package authors triggering a build using GitHub Actions. This results in a new version of 'foo' (an artifact with hash 'abc') being pushed to the Squirrel repo along with its SLSA provenance (signed by Fulcio) and source attestation. When Squirrel gets this push request it verifies the artifact against the specific policy for 'foo', which checks that it was built by GitHub Actions from the expected source repository. After the artifact passes the policy check, a VSA is created and the new package, its original SLSA provenance, and the VSA are made public in the Squirrel repo, available to all users of package 'foo'. Next the maintainers of the Oppy 'baz' package trigger a new build using the Oppy Autobuilder. This results in a new version of 'baz' (an artifact with hash 'def') being pushed to a public Oppy repo with the SLSA provenance (signed by their org-specific keys) published to Rekor. When the repo gets the push request it makes the artifact available to the public. The repo does not perform any verification at this time. An Acme employee then makes a change to their Dockerfile, sending it for review by their co-worker, who approves the change and merges the PR. This then causes the Acme builder to trigger a build.
During this build: 'bar' is compiled from source code stored in the same source repo as the Dockerfile; acorn install downloads 'foo' from the Squirrel repo, verifying the VSA, and recording the use of acorn://foo@abc and its VSA in the build; acme_oppy_get install (a custom script made by Acme) downloads the latest version of the Oppy 'baz' package and queries its SLSA provenance and other attestations from Rekor. It then performs a full verification, checking that it was built by 'https://oppy.example/slsa/builder/v1' and the publicized key. Once verification is complete it records the use of oppy://baz@def and the associated attestations in the build. The build process assembles the SLSA provenance for the container by: recording the Acme git repo the 'bar' source and Dockerfile came from into materials; copying the reported dependencies of acorn://foo@abc and oppy://baz@def into materials and adding their attestations to the output in-toto bundle; recording the CI/CD entrypoint as the invocation; and creating a signed DSSE with the SLSA provenance and adding it to the output in-toto bundle. Once the container is ready for release the Acme verifier checks the SLSA provenance (and other data in the in-toto bundle) using the policy from their own policy repo and issu
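The provenance-assembly steps above can be sketched roughly as follows. The builder ID, URIs, and field names are simplified illustrations, not the real in-toto/DSSE format.

```python
# A minimal sketch (not Acme's real tooling) of assembling SLSA-style
# provenance "materials" for a container build. All names are illustrative.
def assemble_provenance(source_repo, dependencies, entrypoint):
    materials = [{"uri": source_repo}]  # the repo holding 'bar' + Dockerfile
    attestations = []
    for dep in dependencies:
        # Copy each reported dependency into materials and carry along
        # its attestations for the output in-toto bundle.
        materials.append({"uri": dep["uri"], "digest": dep["digest"]})
        attestations.extend(dep.get("attestations", []))
    return {
        "buildType": "https://acme.example/ci/v1",  # hypothetical builder ID
        "invocation": {"entryPoint": entrypoint},   # the CI/CD entrypoint
        "materials": materials,
        "attestations": attestations,
    }

prov = assemble_provenance(
    "git+https://acme.example/bar",
    [
        {"uri": "acorn://foo", "digest": "abc", "attestations": ["foo-vsa"]},
        {"uri": "oppy://baz", "digest": "def", "attestations": ["baz-slsa"]},
    ],
    "ci/cd-pipeline",
)
print(len(prov["materials"]))  # 3: the source repo plus both dependencies
```

A real implementation would wrap this statement in a signed DSSE envelope rather than returning a bare dict.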
GoogleSec.webp 2022-04-13 12:35:03 How to SLSA Part 2 - The Details (lien direct) Posted by Tom Hennen, software engineer, BCID & GOSST In our last post we introduced a fictional example of Squirrel, Oppy, and Acme learning to use SLSA and covered the basics of what their implementations might look like. Today we'll cover the details: where to store attestations and policies, what policies should check, and how to handle key distribution and trust. Attestation storage: Attestations play a large role in SLSA and it's essential that consumers of artifacts know where to find the attestations for those artifacts. Co-located in repo: Attestations could be co-located in the repository that hosts the artifact. This is how Squirrel plans to store attestations for packages. They even want to add support to the Squirrel CLI (e.g. acorn get-attestations foo@1.2.3). Acme really likes this approach because the attestations are always available and it doesn't introduce any new dependencies. Rekor: Meanwhile, Oppy plans to store attestations in Rekor. They like being able to direct users to an existing public instance while not having to maintain any new infrastructure themselves, and the defense in depth the transparency log provides against tampering with the attestations. Though the latency of querying attestations from Rekor is likely too high for doing verification at time of use, Oppy isn't too concerned since they expect users to query Rekor at install time. Hybrid: A hybrid model is also available, where the publisher stores the attestations in Rekor as well as co-located with the artifact in the repo, along with Rekor's inclusion proof. This provides confidence the data was added to Rekor while providing the benefits of co-locating attestations in the repository. Policy content: 'Policy' refers to the rules used to determine if an artifact should be allowed for a use case. Policies often use the package name as a proxy for determining the use case.
For example, to find the policy to apply, you could look up the policy using the package name of the artifact you're evaluating. Policy specifics may vary based on ease of use, availability of data, risk tolerance, and more. Full verification needs more from policies than delegated verification does. Default policy: Default policies allow admission decisions without the need to create specific policies for each package. A default policy is a way of saying “anything that doesn't have a more specific policy must comply with this policy”. Squirrel plans to eventually implement a default policy of “any package without a more specific policy will be accepted as long as it meets SLSA 3”, but they recognize that most packages don't support this yet. Until they achieve critical mass they'll have a default SLSA 0 policy (all artifacts are accepted). While Oppy is leaving verification to their users, they'll suggest a default policy of “any package built by 'https://oppy.example/slsa/builder/v1'”. Specific policy: Squirrel also plans to allow users to create policies for specific packages. For example, this policy requires that package 'foo' must have been built by GitHub Actions, from github.com/foo/acorn-foo, and be SLSA 4. Tool
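The package-name-based policy lookup with a default fallback might look like this minimal sketch; the policy fields and builder IDs are illustrative assumptions, not a real Squirrel or Oppy implementation.

```python
# Per-package policies, keyed by package name (the name is a proxy for the
# use case, as described above). Contents are illustrative only.
SPECIFIC_POLICIES = {
    "foo": {
        "builder": "https://github.com/actions",  # hypothetical builder ID
        "source": "github.com/foo/acorn-foo",
        "min_slsa_level": 4,
    },
}
# SLSA 0 default: accept everything until specific policies reach critical mass.
DEFAULT_POLICY = {"min_slsa_level": 0}

def policy_for(package_name):
    """Look up the policy for an artifact, falling back to the default."""
    return SPECIFIC_POLICIES.get(package_name, DEFAULT_POLICY)

print(policy_for("foo")["min_slsa_level"])            # 4
print(policy_for("anything-else")["min_slsa_level"])  # 0
```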
GoogleSec.webp 2022-04-13 12:30:56 How to SLSA Part 1 - The Basics (lien direct) Posted by Tom Hennen, Software Engineer, BCID & GOSST One of the great benefits of SLSA (Supply-chain Levels for Software Artifacts) is its flexibility. As an open source framework designed to improve the integrity of software packages and infrastructure, it is as applicable to small open source projects as to enterprise organizations. But with this flexibility can come a bewildering array of options for beginners; much like salsa dancing, someone just starting out might be left on the dance floor wondering how and where to jump in. Though it's tempting to try to establish a single standard for how to use SLSA, it's not possible: SLSA is not a line dance where everyone does the same moves, at the same time, to the same song. It's a varied system with different styles, moves, and flourishes. The open source community, organizations, and consumers may all implement SLSA differently, but they can still work with each other. In this three-part series, we'll explore how three fictional organizations would apply SLSA to meet their different needs. In doing so, we will answer some of the main questions that newcomers to SLSA have. Part 1 (the basics): How and when do you verify a package with SLSA? How do you handle artifacts without provenance? Part 2 (the details): Where is the provenance stored? Where is the appropriate policy stored and who should verify it? What should the policies check? How do you establish trust and distribute keys? Part 3 (putting it all together): What does a secure, heterogeneous supply chain look like? The situation: Our fictional example involves three organizations that want to use SLSA: Squirrel, a package manager with a large number of developers and users; Oppy, an open source operating system with an enterprise distribution; and Acme, a mid-sized enterprise. Squirrel wants to make SLSA as easy for their users as possible, even if that means abstracting some details away.
Meanwhile, Oppy doesn't want to abstract anything away from their users, under the philosophy that they should explicitly understand exactly what they're consuming. Acme is trying to produce a container image that contains three artifacts: the Squirrel package 'foo', the Oppy package 'baz', and a custom executable, 'bar', written by Acme employees. This series demonstrates one approach to using SLSA that lets Acme verify the Squirrel and Oppy packages 'foo' and 'baz' and its customers verify the container image. Though not every suggested solution is perfect, the solutions described can be a starting point for discussion and a foundation for new solutions. Basics: In order to SLSA, Squirrel, Oppy, and Acme will all need SLSA-capable build services. Squirrel wants to give their maintainers wide latitude to pick a builder service of their own. To support this, Squirrel will qualify some build services at specific SLSA levels (meaning they can produce artifacts up to that level). To start, Squirrel plans to qualify GitHub Actions using an approach like this, and hopes it can achieve SLSA 4 (pending the result of an independent audit). They're also willing to qualify other build services as needed. Oppy, on the other hand, doesn't need to support arbitrary build services. They plan to have everyone use their Autobuilder network, which they hope to qualify at SLSA 4 (they'll conduct the audit/certification themselves). Finally, Acme plans to use Google Cloud Build, which they'll self-certify at SLSA 4 (pending the result of a Google-conducted a
GoogleSec.webp 2022-04-12 17:47:40 Vulnerability Reward Program: 2021 Year in Review (lien direct) Posted by Sarah Jacobus, Vulnerability Rewards Team Last year was another record setter for our Vulnerability Reward Programs (VRPs). Throughout 2021, we partnered with the security researcher community to identify and fix thousands of vulnerabilities, helping keep our users and the internet safe. Thanks to these incredible researchers, Vulnerability Reward Programs across Google continued to grow, and we are excited to report that in 2021 we awarded a record-breaking $8,700,000 in vulnerability rewards, with researchers donating over $300,000 of their rewards to a charity of their choice. We also launched bughunters.google.com in 2021, a public researcher portal dedicated to keeping Google products and the internet safe and secure. This new platform brings all of our VRPs (Google, Android, Abuse, Chrome, and Google Play) closer together and provides a single intake form, making security bug submission easier than ever. We're excited about everything the new Bug Hunters portal has to offer, including: more opportunities for interaction and a bit of healthy competition through gamification, per-country leaderboards, awards/badges for certain bugs, and more; a more functional and aesthetically pleasing leaderboard (we know a lot of you are using your achievements in our VRPs to find jobs (we're hiring!) and we hope this acts as a useful resource); and a stronger emphasis on learning: bug hunters can improve their skills through the content available in our new Vulnerability Guideline
GoogleSec.webp 2022-04-07 11:33:30 Improving software supply chain security with tamper-proof builds (lien direct) Posted by Asra Ali and Laurent Simon, Google Open Source Security Team (GOSST) Many of the recent high-profile software attacks that have alarmed open-source users globally were consequences of supply chain integrity vulnerabilities: attackers gained control of a build server to use malicious source files, inject malicious artifacts into a compromised build platform, and bypass trusted builders to upload malicious artifacts. Each of these attacks could have been prevented if there were a way to detect that the delivered artifacts diverged from the expected origin of the software. But until now, generating verifiable information that described where, when, and how software artifacts were produced (information known as provenance) was difficult. This information allows users to trace artifacts verifiably back to the source and develop risk-based policies around what they consume. Currently, provenance generation is not widely supported, and solutions that do exist may require migrating build processes to services like Tekton Chains. This blog post describes a new method of generating non-forgeable provenance using GitHub Actions workflows for isolation and Sigstore's signing tools for authenticity. Using this approach, projects building on GitHub runners can achieve SLSA 3 (the third of four progressive SLSA “levels”), which affirms to consumers that your artifacts are authentic and trustworthy. Provenance: SLSA (“Supply-chain Levels for Software Artifacts”) is a framework to help improve the integrity of your project throughout its development cycle, allowing consumers to trace the final piece of software you release all the way back to the source.
Achieving a high SLSA level helps to improve the trust that your artifacts are what you say they are. This blog post focuses on build provenance, which gives users important information about the build: who performed the release process? Was the build artifact protected against malicious tampering? Source provenance describes how the source code was protected, which we'll cover in future blog posts, so stay tuned. Go prototype to generate non-forgeable build provenance: To create tamper-proof evidence of the build and allow consumer verification, you need to: isolate the provenance generation from the build process; isolate against maintainers interfering in the workflow; and provide a mechanism to identify the builder during provenance verification. The full isolation described in the first two points allows consumers to trust that the provenance was faithfully recorded; entities that provide this guarantee are called trusted builders. Our Go prototype solves all three challenges. It also includes running the build inside the trusted builder, which provides a strong guarantee that the build achieves SLSA 3's ephemeral and isolated requirement. How does it work? The following steps create the trusted builder that is necessar Solardwinds
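The consumer-verification side of this scheme can be sketched as follows, assuming a simplified provenance layout and a hypothetical trusted-builder ID; the real SLSA provenance format and Sigstore signature checks are considerably more involved.

```python
import hashlib

# A minimal sketch of consumer-side provenance verification: check that the
# provenance names a trusted builder and that the artifact digest matches.
# The builder ID and provenance layout are illustrative assumptions.
TRUSTED_BUILDERS = {"https://github.com/slsa-framework/example-builder"}

def verify(artifact, provenance):
    digest = hashlib.sha256(artifact).hexdigest()
    return (
        provenance["builder"] in TRUSTED_BUILDERS      # identify the builder
        and provenance["subject_sha256"] == digest     # detect tampering
    )

artifact = b"release-binary-contents"
prov = {
    "builder": "https://github.com/slsa-framework/example-builder",
    "subject_sha256": hashlib.sha256(artifact).hexdigest(),
}
print(verify(artifact, prov))      # True
print(verify(b"tampered!", prov))  # False: digest no longer matches
```

A real verifier would also validate the signature over the provenance against the builder's identity certificate before trusting any of its fields.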
GoogleSec.webp 2022-04-05 10:48:50 Find and $eek! Increased rewards for Google Nest & Fitbit devices (lien direct) Posted by Medha Jain, Program Manager, Devices & Services Security At Google, we constantly invest in security research to raise the bar for our devices, keeping our users safe and building their trust in our products. In 2021, we published Google Nest security commitments, in which we committed to engage with the research community to examine our products and services and report vulnerabilities. We are now looking to deepen this relationship and accelerate the path toward building more secure devices. Starting today, we will introduce a new vulnerability rewards structure for submissions impacting smart home (Google Nest) and wearables (Fitbit) devices through our Bug Hunters platform. Bonus! We are paying higher rewards retroactively for eligible Google Nest and Fitbit device reports submitted in 2021. And, starting today, for the next six months, we will double the reward amount for all new eligible reports applicable to Google Nest & Fitbit devices in scope. We will continue to take reports on our web applications, services, and mobile apps at their existing reward levels. Please keep those coming! An enhanced rewards program: Building on our previous programs to improve devices' embedded security posture, we're bringing all our first-party devices under a single program, starting with Google Nest, Fitbit, and Pixel. This program extends the Android Security Reward Program, making it easier for researchers to submit a vulnerability in first-party devices and improving consistency across our severity assignments. Refer to the Android and Google Devices Security Reward Program for more details. What interests us? We encourage researchers to report firmware, system software, and hardware vulnerabilities. Our wide diversity of platforms provides researchers with a smorgasbord of environments to explore.
What's next? We will be at the Hardwear.io conference this year! The VRP team is looking forward to meeting our security peers in person. We'll be talking about the architecture of a couple of our devices, hoping to give security researchers a head start in finding vulnerabilities. We'll have plenty of swag, too! We will continue to enhance the researchers' experience and participation. We intend to add training documentation and target areas that interest us as we grow the program. A huge thanks to Sarah Jacobus, Adam Bacchus, Ankur Chakraborty, Eduardo Vela, Jay Cox, and Nic Watson. Vulnerability
GoogleSec.webp 2022-03-23 13:03:26 What\'s up with in-the-wild exploits? Plus, what we\'re doing about it. (lien direct) Posted by Adrian Taylor, Chrome Security TeamIf you are a regular reader of our Chrome release blog, you may have noticed that phrases like 'exploit for CVE-1234-567 exists in the wild' have been appearing more often recently. In this post we'll explore why there seems to be such an increase in exploits, and clarify some misconceptions in the process. We'll then share how Chrome is continuing to make it harder for attackers to achieve their goals. How things work today While the increase may initially seem concerning, it's important to understand the reason behind this trend. If it's because there are many more exploits in the wild, it could point to a worrying trend. On the other hand, if we're simply gaining more visibility into exploitation by attackers, it's actually a good thing! It's good because it means we can respond by providing bug fixes to our users faster, and we can learn more about how real attackers operate. So, which is it? It's likely a little of both. Our colleagues at Project Zero publicly track all known in-the-wild “zero day” bugs. Here's what they've reported for browsers: First, we don't believe there was no exploitation of Chromium based browsers between 2015 and 2018. We recognize that we don't have full view into active exploitation, and just because we didn't detect any zero-days during those years, doesn't mean exploitation didn't happen. Available exploitation data suffers from sampling bias. Teams like Google's Threat Analysis Group are also becoming increasingly sophisticated in their efforts to protect users by discovering zero-days and in-the-wild attacks. A good example is a bug in our Portals feature that we fixed last fall. This bug was discovered by a team member in Switzerland and reported to Chrome through our bug tracker. 
While Chrome normally keeps each web page locked away in a box called the “renderer sandbox,” this bug allowed the code to break out, potentially allowing attackers to steal information. Working across multiple time zones and teams, it took the team three days to come up with a fix and roll it out, as detailed in our video on the process: Why so many exploits? There are a number of factors at play, from changes in vendor and attacker behavior, to changes in the software itself. Here are four in particular that we've been discussing and exploring as a team. First, we believe we're seeing more bugs thanks to vendor transparency. Historically, many browser makers didn't announce that a bug was being exploited in the wild, even if they knew it was happening. Today, most major browser makers have increased transparency via publishing details in release communications, and that may account for more publicly tracked “in the wild” exploitation. These efforts have been spearheaded by bo Vulnerability
GoogleSec.webp 2022-02-23 14:07:46 Mitigating kernel risks on 32-bit ARM (lien direct) Posted by Ard Biesheuvel, Google Open Source Security Team Linux kernel support for the 32-bit ARM architecture was contributed in the late 90s, when there was little corporate involvement in Linux development, and most contributors were students or hobbyists, tinkering with development boards, often without much in the way of documentation. Now 20+ years later, 32-bit ARM's maintainer has downgraded its support level to 'odd fixes,' while remaining active as a kernel contributor. This is a common pattern for aging and obsolete architectures: corporate funding for Linux kernel development has tremendously increased the pace of development, but only for architectures with a high return on investment. As a result, the 32-bit ARM port of Linux is essentially in maintenance-only mode, and lacks core Linux advancements such as THREAD_INFO_IN_TASK or VMAP_STACK, which protect against stack overflow attacks. The lack of developer attention does not imply that the 32-bit ARM port has ceased to make economic sense, though. Instead, it has evolved from being one of the spearheads of Linux innovation to a stable and mature platform, and while funding its upstream development may not make sense in the long term, deploying 32-bit ARM into the field today most certainly still makes economic sense when margins are razor thin and BOM costs need to be kept to an absolute minimum. This is why 32-bit ARM is still widely used in embedded systems like set-top boxes and wireless routers. Running 32-bit Linux on 64-bit ARM systems: Ironically, at these low price points, the DRAM is actually the dominant component in terms of BOM cost, and many of these 32-bit ARM systems incorporate a cheap ARMv8 SoC that happens to be capable of running in 64-bit mode as well.
The reason for running 32-bit applications nonetheless is that these generally use less of the expensive DRAM, and can be deployed directly without the need to recompile the binaries. As 32-bit applications don't need a 64-bit kernel (which itself uses more memory due to its internal use of 64-bit pointers), the product ships with a 32-bit kernel instead. If you're choosing to use a 32-bit kernel for its smaller memory footprint, it's not without risks. You'll likely experience performance issues, unpatched vulnerabilities, and unexpected misbehaviors such as the following. 32-bit kernels generally cannot manage more than 1 GiB of physical memory without resorting to HIGHMEM bouncing, and cannot provide a full virtual address space of 4 GiB to user space, as 64-bit kernels can. Side channels or other flaws caused by silicon errata may exist that haven't been mitigated in 32-bit kernels. For example, the hardening against Spectre and Meltdown vulnerabilities was only done for ARMv7 32-bit-only CPUs, and many ARMv8 cores running in 32-bit mode may still be vulnerable (only Cortex-A73 and A75 are handled specifically). And in general, silicon flaws in 64-bit parts that affect the 32-bit kernel are less likely to be found or documented, simply because the silicon validation teams don't prioritize them. The 32-bit ARM kernel does not implement the elaborate alternatives patching framework that is used by other architectures to implement handling of silicon errata, which are particular to certain revisions of certain CPUs. Instead, on 32-bit multiplatform ker Patching
GoogleSec.webp 2022-02-14 12:07:20 🌹 Roses are red, Violets are blue 💙 Giving leets 🧑‍💻 more sweets 🍭 All of 2022! (lien direct) Posted by Eduardo Vela, Vulnerability Matchmaker Until December 31, 2022 we will pay 20,000 to 91,337 USD for exploits of vulnerabilities in the Linux Kernel, Kubernetes, GKE or kCTF that are exploitable on our test lab. We launched an expansion of kCTF VRP on November 1, 2021 in which we paid 31,337 to 50,337 USD to those that are able to compromise our kCTF cluster and obtain a flag. We increased our rewards because we recognized that in order to attract the attention of the community we needed to match our rewards to their expectations. We consider the expansion to have been a success, and because of that we would like to extend it even further, to at least the end of the year (2022). During the last three months, we received 9 submissions and paid over 175,000 USD so far. The submissions included five 0days and two 1days. Three of these are already fixed and are public: CVE-2021-4154, CVE-2021-22600 (patch) and CVE-2022-0185 (writeup). These three bugs were first found by Syzkaller, and two of them had already been fixed on the mainline and stable versions of the Linux Kernel at the time they were reported to us. Based on our experience these last 3 months, we made a few improvements to the submission process. Reporting a 0day will not require including a flag at first. We heard some concerns from participants that exploiting a 0day in the shared cluster could leak it to other participants. As such, we will only ask for the exploit checksum (but you still have to exploit the bug and submit the flag within a week after the patch is merged on mainline). Please make sure that your exploit works on COS with minimal modifications (test it on your own kCTF cluster), as some common exploit primitives (like eBPF and userfaultfd) might not be available. Reporting a 1day will require including a link to the patch.
We will automatically publish the patches of all submissions if the flag is valid. We also encourage you all to include a link to a Syzkaller dashboard report if applicable in order to help reduce duplicate submissions and so you can see which bugs were exploited already.You will be able to submit the exploit in the same form you submit the flag. If you had submitted an exploit checksum for a 0day, please make sure that you include the original exploit as well as the final exploit and make sure to submit it within a week after the patch is merged on mainline. The original exploit shouldn't require major modifications to work. Note that we need to be able to understand your exploit, so please add comments to explain what it is doing.We are now running two clusters, one on the REGULAR release channel and another one on the RAPID release channel. This should provide more flexibility whenever a vulnerability is only exploitable on modern versions of the Linux Kernel or Kubernetes.We are also changing the reward structure Uber
GoogleSec.webp 2022-01-19 10:00:00 Reducing Security Risks in Open Source Software at Scale: Scorecards Launches V4 (lien direct) Posted by Laurent Simon and Azeem Shaikh, Google Open Source Security Team (GOSST) Since our July announcement of Scorecards V2, the Scorecards project (an automated security tool to flag risky supply chain practices in open source projects) has grown steadily to over 40 unique contributors and 18 implemented security checks. Today we are proud to announce the V4 release of Scorecards, with larger scaling, a new security check, and a new Scorecards GitHub Action for easier security automation. The Scorecards Action is released in partnership with GitHub and is available from GitHub's Marketplace. The Action makes using Scorecards easier than ever: it runs automatically on repository changes to alert developers about risky supply-chain practices. Maintainers can view the alerts on GitHub's code scanning dashboard, which is available for free to public repositories on GitHub.com and via GitHub Advanced Security for private repositories. Additionally, we have scaled our weekly Scorecards scans to over one million GitHub repositories, and have partnered with the Open Source Insights website for easy user access to the data. For more details about the release, including the new Dangerous-Workflow security check, visit the OpenSSF's official blog post here. Tool
GoogleSec.webp 2021-12-21 10:54:50 Understanding the Impact of Apache Log4j Vulnerability (lien direct) Posted by James Wetter and Nicky Ringland, Open Source Insights Team Editor's note: The below numbers were calculated based on both log4j-core and log4j-api, as both were listed on the CVE. Since then, the CVE has been updated with the clarification that only log4j-core is affected. The ecosystem impact numbers for just log4j-core, as of December 19th, are over 17,000 packages affected, which is roughly 4% of the ecosystem. 25% of affected packages have fixed versions available. The linked list, which continues to be updated, only includes packages which depend on log4j-core. More than 35,000 Java packages, amounting to over 8% of the Maven Central repository (the most significant Java package repository), have been impacted by the recently disclosed log4j vulnerabilities (1, 2), with widespread fallout across the software industry. The vulnerabilities allow an attacker to perform remote code execution by exploiting the insecure JNDI lookups feature exposed by the logging library log4j. This exploitable feature was enabled by default in many versions of the library. This vulnerability has captivated the information security ecosystem since its disclosure on December 9th because of both its severity and widespread impact. As a popular logging tool, log4j is used by tens of thousands of software packages (known as artifacts in the Java ecosystem) and projects across the software industry. Users' lack of visibility into their dependencies and transitive dependencies has made patching difficult; it has also made it difficult to determine the full blast radius of this vulnerability.
Using Open Source Insights, a project to help understand open source dependencies, we surveyed all versions of all artifacts in the Maven Central Repository to determine the scope of the issue in the open source ecosystem of JVM-based languages, and to track the ongoing efforts to mitigate the affected packages. How widespread is the log4j vulnerability? As of December 16, 2021, we found that 35,863 of the available Java artifacts from Maven Central depend on the affected log4j code. This means that more than 8% of all packages on Maven Central have at least one version that is impacted by this vulnerability. (These numbers do not encompass all Java packages, such as directly distributed binaries, but Maven Central is a strong proxy for the state of the ecosystem.) As far as ecosystem impact goes, 8% is enormous. The average ecosystem impact of advisories affecting Maven Central is 2%, with the median less than 0.1%. Vulnerability Patching
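The transitive-dependency effect described above (one vulnerable library pulling thousands of downstream packages into the blast radius) can be illustrated with a toy reverse-dependency graph; the package names here are made up for the example.

```python
from collections import deque

# Toy reverse-dependency graph: package -> packages that depend on it.
# Anything reachable from log4j-core in this graph is (transitively) affected.
REVERSE_DEPS = {
    "log4j-core": ["lib-a", "lib-b"],
    "lib-a": ["app-1"],
    "lib-b": ["app-1", "app-2"],
}

def affected_by(root):
    """Breadth-first traversal collecting all transitive dependents."""
    seen, queue = set(), deque([root])
    while queue:
        pkg = queue.popleft()
        for dependent in REVERSE_DEPS.get(pkg, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(sorted(affected_by("log4j-core")))  # ['app-1', 'app-2', 'lib-a', 'lib-b']
```

Note that app-1 never declares log4j-core directly, which is exactly why developers lacking dependency visibility struggled to tell whether they were in scope.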
GoogleSec.webp 2021-12-17 21:11:02 Apache Log4j Vulnerability (lien direct) Like many other companies, we're closely following the multiple CVEs regarding Apache Log4j 2. Our security teams are investigating any potential impact on Google products and services and are focused on protecting our users and customers. We encourage anyone who manages environments containing Log4j 2 to update to the latest version. Based on findings in our ongoing investigations, here is our list of product and service updates as of December 17th (CVE-2021-44228 & CVE-2021-45046). Android is not aware of any impact to the Android Platform or Enterprise. At this time, no update is required for this specific vulnerability, but we encourage our customers to ensure that the latest security updates are applied to their devices. Chrome OS releases and infrastructure are not using versions of Log4j affected by the vulnerability. Chrome Browser releases, infrastructure and admin console are not using versions of Log4j affected by the vulnerability. Google Cloud has a specific advisory dedicated to updating customers on the status of GCP and Workspace products and services. Google Marketing Platform, including Google Ads, is not using versions of Log4j affected by the vulnerability. This includes Display & Video 360, Search Ads 360, Google Ads, Analytics (360 and free), Optimize 360, Surveys 360 & Tag Manager 360. YouTube is not using versions of Log4j affected by the vulnerability. We will continue to update this advisory with the latest information. Vulnerability
GoogleSec.webp 2021-12-16 17:28:18 Improving OSS-Fuzz and Jazzer to catch Log4Shell (direct link) Posted by Jonathan Metzman, Google Open Source Security Team. The discovery of the Log4Shell vulnerability has set the internet on fire. Similar to Shellshock and Heartbleed, Log4Shell is just the latest catastrophic vulnerability in software that runs the internet. Our mission as the Google Open Source Security Team is to secure the open source libraries the world depends on, such as Log4j. One of our capabilities in this space is OSS-Fuzz, a free fuzzing service that is used by over 500 critical open source projects and has found more than 7,000 vulnerabilities in its lifetime. We want to empower open source developers to secure their code on their own. Over the next year we will work on better automated detection of non-memory-corruption vulnerabilities such as Log4Shell. We have started this work by partnering with the security company Code Intelligence to provide continuous fuzzing for Log4j as part of OSS-Fuzz. Also as part of this partnership, Code Intelligence improved its Jazzer fuzzing engine to make it capable of detecting remote JNDI lookups. We have awarded Code Intelligence $25,000 for this effort and will continue to work with them on securing the open source ecosystem. [Caption: OSS-Fuzz and Jazzer finding the Log4Shell vulnerability] Vulnerabilities like Log4Shell are an eye-opener for the industry in terms of new attack vectors.
With OSS-Fuzz and Jazzer, we can now detect this class of vulnerability so that it can be fixed before it becomes a problem in production code. Over the past year we have made a number of investments to strengthen the security of critical open source projects, and recently announced our $10 billion commitment to cybersecurity defense, including $100 million to support third-party foundations that manage open source security priorities and help fix vulnerabilities. We appreciate the maintainers, security engineers and incident responders who are working to mitigate Log4j and make our internet ecosystem safer. Check out our documentation to get started using OSS-Fuzz. Vulnerability
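The triggering pattern behind this bug class can be illustrated with a toy check. This is a simplification, not Jazzer's actual detection, which instruments live JNDI calls rather than scanning strings; the class and method names here are invented for the sketch.

```java
// Toy illustration of the bug class Jazzer now detects. In vulnerable Log4j
// versions, logging attacker-controlled text such as "${jndi:ldap://evil/a}"
// triggered a remote JNDI lookup. Jazzer flags the live lookup through
// instrumentation hooks; this sketch only shows the triggering pattern.
public class JndiLookupCheck {

    /** Returns true if the message contains a ${jndi:...} lookup expression. */
    static boolean containsJndiLookup(String message) {
        return message.toLowerCase().contains("${jndi:");
    }

    public static void main(String[] args) {
        System.out.println(containsJndiLookup("user=alice logged in"));          // false
        System.out.println(containsJndiLookup("${jndi:ldap://evil.example/a}")); // true
    }
}
```

A real detector also has to account for Log4j's nested-lookup obfuscation (e.g. `${${lower:j}ndi:...}`), which is exactly why hooking the lookup itself, as Jazzer does, beats string matching.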
GoogleSec.webp 2021-12-14 13:01:03 Empowering the next generation of Android Application Security Researchers (direct link) Posted by Jon Bottarini, Security Program Manager & Lena Katib, Strategic Partnerships Manager. The external security researcher community plays an integral role in making the Google Play ecosystem safe and secure. Through this partnership with the community, Google has been able to collaborate with third-party developers to fix thousands of security issues in Android applications before they are exploited, and to reward security researchers for their hard work and dedication. In order to empower the next generation of Android security researchers, Google has collaborated with industry partners including HackerOne and PayPal to host a number of Android App Hacking Workshops. These workshops are designed to educate security researchers and cybersecurity students of all skill levels on how to find Android application vulnerabilities through a series of hands-on working sessions, both in-person and virtual. Through these workshops, we've seen attendees from groups such as Merritt College's cybersecurity program and alumni of Hack the Hood go on to report real-world security vulnerabilities to the Google Play Security Rewards program. This reward program is designed to identify and mitigate vulnerabilities in apps on Google Play, and keep Android users, developers and the Google Play ecosystem safe. Today, we are releasing our slide deck and workshop materials, including source code for a custom-built Android application that allows you to test your Android application security skills in a variety of capture-the-flag-style challenges. These materials cover a wide range of techniques for finding vulnerabilities in Android applications. Whether you're just getting started or have already found many bugs, chances are you'll learn something new from these challenges!
If you get stuck and need a hint on solving a challenge, the solutions for each are available in the Android App Hacking Workshop here. As you work through the challenges and learn more about the techniques and tips described in our workshop materials, we'd love to hear your feedback. Additional Resources: If you want to learn more about how to prepare, launch, and run a Vulnerability Disclosure Program (VDP) or discover how to work with external security researchers, check out our VDP course here. If you're a developer looking to build more secure applications, check out Android app security best practices here. Vulnerability
GoogleSec.webp 2021-12-02 15:00:00 Exploring Container Security: A Storage Vulnerability Deep Dive (direct link) Posted by Fabricio Voznika and Mauricio Poppe, Google Cloud. Kubernetes security is constantly evolving, keeping pace with enhanced functionality, usability and flexibility while also balancing the security needs of a wide and diverse set of use cases. Recently, the GKE Security team discovered a high-severity vulnerability that allowed workloads to have access to parts of the host filesystem outside the mounted volume boundaries. Although the vulnerability was patched back in September, we thought it would be beneficial to write up a more in-depth analysis of the issue to share with the community. We assessed the impact of the vulnerability as described in vulnerability management in open-source Kubernetes and worked closely with the GKE Storage team and the Kubernetes Security Response Committee to find a fix. In this post we'll give some background on how the subpath storage system works, an overview of the vulnerability, the steps to find the root cause and the fix, and finally some recommendations for GKE and Anthos users. Kubernetes filesystems: intro to volume subpath. The vulnerability, CVE-2021-25741, was caused by a race condition during the creation of a subpath bind mount inside a container, and allowed an attacker to gain unauthorized access to the underlying node filesystem and its sensitive files. We'll describe how that system is supposed to work, and then talk about the vulnerability. The volume subpath feature in Kubernetes enables sharing a volume in multiple containers inside a pod. For example, we could create a Pod with an InitContainer that creates directories with pre-populated data in a mounted filesystem volume.
These directories can then be used by containers in the same Pod by mounting the same volume and optionally specifying a subpath field to limit what's visible inside the container. While there are some great use cases for this feature, it's an area that has had vulnerabilities discovered in the past. The kubelet must be extra cautious when handling user-owned subpaths because it operates with privileges in the host. One vulnerability that was previously discovered involved the creation of a malicious workload where an InitContainer would create a symlink pointing to any location in the host. For example, the InitContainer could mount a volume in /mnt and create a symlink /mnt/attack inside the container pointing to /etc. Later in the Pod lifecycle, another container would attempt to mount the same volume with subpath attack. While preparing the volumes for the container, the kubelet would end up following the symlink to the host's /etc instead of the container's /etc, unknowingly exposing the host filesystem to the container. A previous fix made sure that the subpath mount location is resolved and validated to point to a location inside the base volume, and that it's not changeable by the user between the time the path was validated and when the container runtime bind-mounts it. This race condition is known as time-of-check to time-of-use (TOCTOU), where the subject being validated changes after it has been validated. These validations and others are summarized in the following container lifecycle sequence diagram. Vulnerability
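The symlink escape described above can be demonstrated in a few lines. This is a standalone sketch, not kubelet code; the `staysInside` helper and all paths are invented for illustration, with temp directories standing in for the volume and the host filesystem.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Standalone sketch (not kubelet code) of the subpath symlink escape:
// a path that appears to stay inside a volume can resolve outside it
// once symlinks are followed.
public class SubpathCheck {

    /** Returns true if candidate, after resolving symlinks, is still under base. */
    static boolean staysInside(Path base, Path candidate) throws Exception {
        return candidate.toRealPath().startsWith(base.toRealPath());
    }

    public static void main(String[] args) throws Exception {
        Path volume = Files.createTempDirectory("volume");    // the mounted volume
        Path hostEtc = Files.createTempDirectory("host-etc"); // stand-in for the host's /etc

        Path data = Files.createDirectory(volume.resolve("data"));
        Path attack = volume.resolve("attack");
        Files.createSymbolicLink(attack, hostEtc);            // the InitContainer's malicious link

        System.out.println(staysInside(volume, data));   // true: legitimate subpath
        System.out.println(staysInside(volume, attack)); // false: resolves outside the volume

        // Note: this check alone is not enough. The fix also has to ensure the
        // path cannot change between validation and the bind mount, which is
        // the TOCTOU race behind CVE-2021-25741.
    }
}
```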
GoogleSec.webp 2021-11-11 13:13:06 ClusterFuzzLite: Continuous fuzzing for all (direct link) Posted by Jonathan Metzman, Google Open Source Security Team. In recent years, continuous fuzzing has become an essential part of the software development lifecycle. By feeding unexpected or random data into a program, fuzzing catches bugs that would otherwise slip through the most thorough manual checks and provides coverage that would take staggering human effort to replicate. NIST's guidelines for software verification, recently released in response to the White House Executive Order on Improving the Nation's Cybersecurity, specify fuzzing among the minimum standard requirements for code verification. Today, we are excited to announce ClusterFuzzLite, a continuous fuzzing solution that runs as part of CI/CD workflows to find vulnerabilities faster than ever before. With just a few lines of code, GitHub users can integrate ClusterFuzzLite into their workflow and fuzz pull requests to catch bugs before they are committed, enhancing the overall security of the software supply chain. Since its release in 2016, over 500 critical open source projects have integrated with Google's OSS-Fuzz program, resulting in over 6,500 vulnerabilities and 21,000 functional bugs being fixed. ClusterFuzzLite goes hand-in-hand with OSS-Fuzz by catching regression bugs much earlier in the development process. Large projects including systemd and curl are already using ClusterFuzzLite during code review, with positive results. According to Daniel Stenberg, author of curl, “When the human reviewers nod and have approved the code and your static code analyzers and linters can't detect any more issues, fuzzing is what takes you to the next level of code maturity and robustness.
OSS-Fuzz and ClusterFuzzLite help us maintain curl as a quality project, around the clock, every day and every commit.” With the release of ClusterFuzzLite, any project can integrate this essential testing standard and benefit from fuzzing. ClusterFuzzLite offers many of the same features as ClusterFuzz, such as continuous fuzzing, sanitizer support, corpus management, and coverage report generation. Most importantly, it's easy to set up and works with closed source projects, making ClusterFuzzLite a convenient option for any developer who wants to fuzz their software.
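The core idea, feeding random data into a program until something crashes, can be shown with a minimal loop. This is a toy, not ClusterFuzzLite: the target, its planted bug, and every name here are invented for illustration, and real fuzzers add coverage guidance, sanitizers, and corpus management on top.

```java
import java.nio.charset.StandardCharsets;
import java.util.Random;

// Minimal illustration of fuzzing: generate random inputs, run the target,
// and report any input that makes it throw.
public class TinyFuzzer {

    // Toy target with a planted bug: it crashes on any input starting with 'B'.
    static void parse(String input) {
        if (!input.isEmpty() && input.charAt(0) == 'B')
            throw new IllegalStateException("parser bug");
    }

    /** Runs the target on random inputs; returns a crashing input, or null. */
    static String findCrash(long seed, int iterations) {
        Random rng = new Random(seed);
        for (int i = 0; i < iterations; i++) {
            byte[] buf = new byte[4];
            rng.nextBytes(buf);
            String input = new String(buf, StandardCharsets.ISO_8859_1);
            try {
                parse(input);
            } catch (RuntimeException e) {
                return input; // found a crasher; a real fuzzer would minimize and save it
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String crasher = findCrash(42, 100_000);
        System.out.println(crasher == null
                ? "no crash found"
                : "crash on input starting with " + crasher.charAt(0));
    }
}
```

Blind random generation like this only works because the toy bug is shallow; coverage guidance is what lets tools like ClusterFuzzLite reach bugs buried behind many branches.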
GoogleSec.webp 2021-11-01 12:41:31 Trick & Treat! 🎃 Paying Leets and Sweets for Linux Kernel privescs and k8s escapes (direct link) Posted by Eduardo Vela, Google Bug Hunters Team. Starting today and for the next 3 months (until January 31, 2022), we will pay 31,337 USD to security researchers who exploit privilege escalation in our lab environment with a patched vulnerability, and 50,337 USD to those who use a previously unpatched vulnerability or a new exploit technique. We are constantly investing in the security of the Linux kernel because much of the internet and Google, from the devices in our pockets to the services running on Kubernetes in the cloud, depends on its security. We research its vulnerabilities and attacks, as well as study and develop its defenses. But we know that there is more work to do. That's why we have decided to build on top of our kCTF VRP from last year and triple our previous reward amounts (for at least the next 3 months). Our base reward for each publicly patched vulnerability is 31,337 USD (at most one exploit per vulnerability), but the reward can go up to 50,337 USD in two cases: if the vulnerability was otherwise unpatched in the kernel (0-day), or if the exploit uses a new attack or technique, as determined by Google. We hope the new rewards will encourage the security community to explore new kernel exploitation techniques to achieve privilege escalation and drive quicker fixes for these vulnerabilities. It is important to note that the easiest exploitation primitives are not available in our lab environment due to the hardening done on Container-Optimized OS.
Note this program complements Android's VRP rewards, so exploits that work on Android could also be eligible for up to 250,000 USD (in addition to this program). The mechanics are: connect to the kCTF VRP cluster, obtain root and read the flag (read this writeup for how it was done before, and this threat model for inspiration), and then submit your flag and a checksum of your exploit in this form. If applicable, report vulnerabilities upstream. We strongly recommend including a patch, since that could qualify for an additional reward from our Patch Reward Program, but please report vulnerabilities upstream promptly once you confirm they are exploitable. Report your finding to Google VRP once all patches are publicly available (we don't want to receive details of unpatched vulnerabilities ahead of the public). Provide the exploit code and the algorithm used to calculate the hash checksum. A rough description of the exploit strategy is welcome. Reports will be triaged on a weekly basis. If anyone has problems with the lab environment (if it's unavailable, technical issues or other questions), contact us on Discord in #kctf. You can read more details about the program here. Happy hunting!
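The post asks for a checksum of the exploit plus the algorithm used to compute it, without prescribing one. As one assumed example, a SHA-256 digest of the exploit archive bytes could be produced like this (the `sha256Hex` helper is invented for this sketch; `HexFormat` requires Java 17+):

```java
import java.security.MessageDigest;
import java.util.HexFormat;

// Assumed example of producing the exploit checksum for a kCTF VRP
// submission: a SHA-256 digest of the exploit bytes. The program's rules,
// not this sketch, decide which hash to use; always report the algorithm
// alongside the checksum, as the post requests.
public class ExploitChecksum {

    /** Returns the lowercase hex SHA-256 digest of the given bytes. */
    static String sha256Hex(byte[] data) throws Exception {
        return HexFormat.of().formatHex(
                MessageDigest.getInstance("SHA-256").digest(data));
    }

    public static void main(String[] args) throws Exception {
        byte[] exploit = "exploit archive bytes go here".getBytes();
        System.out.println("sha256: " + sha256Hex(exploit));
    }
}
```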
GoogleSec.webp 2021-10-28 13:00:00 Protecting your device information with Private Set Membership (direct link) Posted by Kevin Yeo and Sarvar Patel, Private Computing Team. At Google, keeping you safe online is our top priority, so we continuously build the most advanced privacy-preserving technologies into our products. Over the past few years, we've utilized innovations in cryptographic research to keep your personal information private by design and secure by default. As part of this, we launched Password Checkup, which protects account credentials by notifying you if an entered username and password are known to have been compromised in a prior data breach. Using cryptographic techniques, Password Checkup can do this without revealing your credentials to anyone, including Google. Today, Password Checkup protects users across many platforms including Android, Chrome and Google Password Manager. Another example is Private Join and Compute, an open source protocol which enables organizations to work together and draw insights from confidential data sets. Two parties are able to encrypt their data sets, join them, and compute statistics over the joint data. By leveraging secure multi-party computation, Private Join and Compute is designed to ensure that the plaintext data sets are concealed from all parties. In this post, we introduce the next iteration of our research, Private Set Membership, as well as its open-source availability. At a high level, Private Set Membership considers the scenario in which Google holds a database of items, and user devices need to contact Google to check whether a specific item is found in the database. As an example, users may want to check membership of a computer program on a block list consisting of known malicious software before executing the program. Often, the set's contents and the queried items are sensitive, so we designed Private Set Membership to perform this task while preserving the privacy of our users.
Protecting your device information during enrollment. Beginning in Chrome 94, Private Set Membership will enable Chrome OS devices to complete the enrollment process in a privacy-preserving manner. Device enrollment is an integral part of the out-of-box experience that welcomes you when getting started with a Chrome OS device. The device enrollment process requires checking membership of device information in encrypted Google databases, including checking if a device is enterprise enrolled or determining if a device was pre-packaged with a license. The correct end state of your Chrome OS device is determined using the results of these membership checks. During the enrollment process, we protect your Chrome OS devices by ensuring that no information ever leaves the device that may be decrypted by anyone else when using Private Set Membership. Google will never learn any device information and devices will not learn any unnecessary information about other devices. To our knowledge, this is the first instance of advanced cryptographic tools being leveraged to protect device information during the enrollment process. A deeper look at Private Set Membership: Private Set Membership is built upon two cryptographic tools. Homomorphic encryption is a powerful cryptographic tool that enables computation over encrypted data without the need for decryption. Tool
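The privacy goal can be sketched with a heavily simplified model. This uses hash-prefix bucketing in the style of Password Checkup's k-anonymity queries, not the homomorphic-encryption protocol that Private Set Membership actually uses, and every name in it is invented for illustration.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified illustration of the privacy goal: the client reveals only a short
// hash prefix, the server returns the whole matching bucket, and the final
// comparison happens client-side, so the server never learns which exact item
// was queried. The real Private Set Membership protocol achieves a stronger
// guarantee using homomorphic encryption.
public class BucketedMembership {

    static String sha256Hex(String item) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (byte b : MessageDigest.getInstance("SHA-256").digest(item.getBytes()))
            sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** Server side: bucket database item hashes by their first two hex chars. */
    static Map<String, List<String>> buildBuckets(List<String> database) throws Exception {
        Map<String, List<String>> buckets = new HashMap<>();
        for (String item : database) {
            String h = sha256Hex(item);
            buckets.computeIfAbsent(h.substring(0, 2), k -> new ArrayList<>()).add(h);
        }
        return buckets;
    }

    /** Client side: fetch the bucket for the query's prefix, then check locally. */
    static boolean isMember(Map<String, List<String>> buckets, String item) throws Exception {
        String h = sha256Hex(item);
        return buckets.getOrDefault(h.substring(0, 2), List.of()).contains(h);
    }

    public static void main(String[] args) throws Exception {
        Map<String, List<String>> buckets = buildBuckets(List.of("device-123", "device-456"));
        System.out.println(isMember(buckets, "device-123")); // true
        System.out.println(isMember(buckets, "device-999")); // false
    }
}
```

Note the trade-off this sketch leaves open: the server still sees a prefix and learns the bucket was requested, which is exactly the kind of leakage the homomorphic-encryption approach is designed to eliminate.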
GoogleSec.webp 2021-10-27 15:41:13 Launching a collaborative minimum security baseline (direct link) Posted by Royal Hansen, Vice President, Security. According to an Opus and Ponemon Institute study, 59% of companies have experienced a data breach caused by one of their vendors or third parties. Outsourcing operations to third-party vendors has become a popular business strategy as it allows organizations to save money and increase operational efficiency. While these are positives for business operations, they do create significant security risks. These vendors have access to critical systems and customer data, so their security posture becomes equally important. Up until now, organizations of all sizes have had to design and implement their own security baselines for vendors that align with their risk posture. Unfortunately, this creates an impossible situation for vendors and organizations alike as they try to accommodate thousands of different requirements. To solve this challenge, organizations across the industry teamed up to design Minimum Viable Secure Product, or MVSP: a vendor-neutral security baseline designed to eliminate overhead, complexity and confusion during the procurement, RFP and vendor security assessment process by establishing minimum acceptable security baselines. With MVSP, the industry can increase clarity during each phase so parties on both sides of the equation can achieve their goals, and reduce the onboarding and sales cycle by weeks or even months. MVSP was developed and is backed by companies across the industry, including Google, Salesforce, Okta, Slack and more. Our goal is to raise the minimum bar for security across the industry while simplifying the vetting process. MVSP is a collaborative baseline focused on developing a set of minimum security requirements for business-to-business software and business process outsourcing suppliers.
Designed with simplicity in mind, it contains only those controls that must, at a minimum, be implemented to ensure a reasonable security posture. MVSP is presented in the form of a minimum baseline checklist that can be used to verify the security posture of a solution. How can MVSP help you? Security teams measuring vendor offerings against a set of minimum security baselines: MVSP ensures that vendor selection and RFPs include a minimum baseline that is backed by the industry. Communicating minimum requirements up front ensures everyone understands where they stand and that the expectations are clear. Internal teams looking to measure their security against minimum requirements: MVSP provides a set of minimum security baselines that can be used as a checklist to understand gaps in the security of a product or service. This can be used to highlight opportunities for improvement and raise their visibility within the organization, with clearly defined benefits. Procurement teams gathering information about vendor services: MVSP provides a single set of security-relevant questions that are publicly available and industry-backed. Aligning on a single set of baselines allows clearer understanding from vendors, resulting in a quicker and more accurate response. Legal teams negotiating Data Breach
GoogleSec.webp 2021-10-27 15:01:30 Pixel 6: Setting a new standard for mobile security (direct link) Posted by Dave Kleidermacher, Jesse Seed, Brandon Barbello, and Stephan Somogyi, Android, Pixel & Tensor security teams. With Pixel 6 and Pixel 6 Pro, we're launching our most secure Pixel phone yet, with 5 years of security updates and the most layers of hardware security. These new Pixel smartphones take a layered security approach, with innovations spanning from the Google Tensor system on a chip (SoC) hardware to new Pixel-first features in the Android operating system, making it the first Pixel phone with Google security from the silicon all the way to the data center. Multiple dedicated security teams have also worked to ensure that Pixel's security is provable through transparency and external validation. Secure to the core: Google has put user data protection and transparency at the forefront of hardware security with Google Tensor. Google Tensor's main processors are Arm-based and utilize TrustZone™ technology. TrustZone is a key part of our security architecture for general secure processing, but the security improvements included in Google Tensor go beyond TrustZone. [Figure 1: Pixel secure environments] The Google Tensor security core is a custom-designed security subsystem dedicated to the preservation of user privacy. It's distinct from the application processor, not only logically but physically, and consists of a dedicated CPU, ROM, one-time-programmable (OTP) memory, crypto engine, internal SRAM, and protected DRAM. For Pixel 6 and 6 Pro, the security core's primary use cases include protecting user data keys at runtime, hardening secure boot, and interfacing with Titan M2™. Your secure hardware is only as good as your secure OS, and we are using Trusty, our open source trusted execution environment. Trusty OS is the secure OS used both in TrustZone and the Google Tensor security core.
With Pixel 6 and Pixel 6 Pro your security is enhanced by the new Titan M2™, our discrete security chip, fully designed and developed by Google. In this next-generation chip, we moved to an in-house designed RISC-V processor, with extra speed and memory, and made it even more resilient to advanced attacks. Titan M2™ has been tested against the most rigorous standard for vulnerability assessment, AVA_VAN.5, by an independent, accredited evaluation lab. Titan M2™ supports Android StrongBox, which securely generates and stores keys used to protect your PINs and passwords, and works hand-in-hand with the Google Tensor security core to protect user data keys while in use in the SoC. Moving a step higher in the system, Pixel 6 and Pixel 6 Pro ship with Android 12 and a slew of Pixel-first and Pixel-exclusive features. Enhanced controls: We aim to giv Vulnerability
GoogleSec.webp 2021-10-05 09:00:00 Google Protects Your Accounts – Even When You No Longer Use Them (direct link) Posted by Sam Heft-Luthy, Product Manager, Privacy & Data Protection Office. What happens to our digital accounts when we stop using them? It's a question we should all ask ourselves, because when we are no longer keeping tabs on what's happening with old accounts, they can become targets for cybercrime. In fact, quite a few recent high-profile breaches targeted inactive accounts. The Colonial Pipeline ransomware attack came through an inactive account that didn't use multi-factor authentication, according to a consultant who investigated the incident. And in the case of the recent T-Mobile breach this summer, information from inactive prepaid accounts was accessed through old billing files. Inactive accounts can pose a serious security risk. For Google users, Inactive Account Manager helps with that problem. You can decide when Google should consider your account inactive and whether Google should delete your data or share it with a trusted contact. Here's how it works: once you sign up for Inactive Account Manager, available in My Account settings, you are asked to decide three things. When the account should be considered inactive: you can choose 3, 6, 12 or 18 months of inactivity before Google takes action on your account. Google will notify you a month before the designated time via a message sent to your phone and an email sent to the address you provide. Who to notify and what to share: you can choose up to 10 people for Google to notify once your Google Account becomes inactive (they won't be notified during setup). You can also give them access to some of your data. If you choose to share data with your trusted contacts, the email will include a list of the selected data you wanted to share with them, and a link they can follow to download that data.
This can include things like photos, contacts, emails, documents and other data that you specifically choose to share with your trusted contact. You can also choose to set up a Gmail AutoReply, with a custom subject and message explaining that you've ceased using the account. If your inactive Google Account should be deleted: after your account becomes inactive, Google can delete all its content or send it to your designated contacts. If you've decided to allow someone to download your content, they'll be able to do so for 3 months before it gets deleted. If you choose to delete your Google Account, this will include your publicly shared data (for example, your YouTube videos, or blogs on Blogger). You can review the data associated with your account on the Google Dashboard. If you use Gmail with your account, you'll no longer be able to access that email once your account becomes inactive. You'll also be unable to reuse that Gmail username. Setting up an Inactive Account plan is a simple step you can take to protect your data, secure your account in case it becomes inactive, and ensure that your digital legacy is shared with your trusted contacts in case you become unable to access your account. Our Privacy Checkup now reminds you to set up a plan for your account, and we'll send you an occasional reminder about your plan via email. At Google, we are constantly working to keep you safer online. This October, as we celebrate Cybersecurity Awareness Month, we want to remind our users of the security and privacy controls they have at their fingertips. For m Ransomware
Last update at: 2024-05-16 11:08:14