ISC and the EU Cyber Resilience Act

Updates

Since the open source community started mobilizing to respond to the Cyber Resilience Act, there have been a number of excellent blog posts on the topic. (These are listed with the most recent towards the top.)

What is the EU Cyber Resilience Act (CRA)?

The EU Cyber Resilience Act is proposed legislation from the EU Commission that aims to improve the cybersecurity of products containing software by requiring the application of a “CE Mark” for software. Compliance requirements vary based on whether the product is classified as “critical,” which is a good thing, but as far as I can tell, the open source that ISC publishes appears to fall into the most heavily regulated category under the Act. There is an exemption for open source, but open source with any associated commercial activity (meaning any significant funding stream to sustain it) is explicitly included in the regulation. (The definition of a product and of associated commercial activity, as well as the classification scheme, are areas that could use more clarification.)

For an excellent overview of the proposed legislation, take a look at Maarten Aertsen’s blog from November 2022.

We have some comments on the CRA

ISC, along with three other non-profit developers of open source infrastructure software (NLnet Labs, The Network Device Education Foundation, and CZ.NIC), has submitted a joint comment on the impending EU CRA, calling out some of our specific concerns about the legislation as it applies to Internet infrastructure. We invite you to read our joint response.

ISC Response to EU Cyber Resilience Act

Most of the objective requirements are reasonable best practices

To be clear, we are very committed to working towards high standards of cybersecurity in our software. At ISC, we already meet most of the specific requirements of the CRA in areas such as vulnerability reporting, patching, and public notifications. We have well-established and well-documented policies and procedures for vulnerability reporting and, unfortunately, we have a lot of experience in this area. Some of the CRA requirements are fairly reasonable and in line with generally accepted best practices.

However, other requirements pose a problem for us, and some are completely unworkable for any open source project. For example, the CRA as currently drafted appears to prohibit our established practice of notifying the DNS root system operators, as well as some of our technical support subscribers and the operating system packagers of our software, of security vulnerabilities before we make the vulnerability public. This advance notification is both good for the security of the Internet and helpful for ISC as a business practice, because it is a benefit for our supporters. Other requirements of the CRA, such as the prohibition against releasing any software with any known vulnerability, regardless of the timing of the report or the severity of the vulnerability, are simply impractical for a large project with frequent releases and many issues. The regulation refers to ‘putting a product on the EU market,’ which is hard to interpret in the context of an open source project with an open repository, where the software is always available to anyone on the Internet. There are many places where the CRA as written is unclear, and critical terms are inadequately defined.

However, even if these requirements are clarified or modified, compliance with this new regulation is not going to help ISC publish more secure software.

My main gripe about the CRA is that it adds burdens to open source publishers in the name of increasing cybersecurity, when the main cause of poor open source software security is a lack of resources. From my reading, once the CRA takes effect, ISC will have to pay third-party auditors to audit our development, documentation, testing, and release processes, at indeterminate intervals. Not only will we have to pay for this scrutiny; it may also delay releases until the auditors are satisfied. If this were an optional or consultative engagement, it might possibly be helpful, but as a mandatory requirement, it is definitely going to impose additional overhead while adding little value.

The CRA may be a necessary and effective step towards improving the cybersecurity of commercial products, particularly the large numbers of unmaintained Internet of Things (IoT) devices that can form dangerous botnets and are the Achilles heel of the tech industry. However, the CRA explicitly includes developers of free open source software in the scope of this regulation. While I agree that open source developers have an ethical responsibility for the quality of their output, this regulatory burden is misplaced and unhelpful. I would like to propose some alternative suggestions for how the EU could more effectively support the production and implementation of secure open source.

What problems is the CRA addressing?

The CRA lays out two main categories of cybersecurity issues, or risks, that it intends to address:

“(1) a low level of cybersecurity, reflected by widespread vulnerabilities and the insufficient and inconsistent provision of security updates to address them, and (2) an insufficient understanding and access to information by users, preventing them from choosing products with adequate cybersecurity properties or using them in a secure manner.”

The Act goes on to propose remedies for these two problems. But what are these solutions?

1) Improvements to software quality

The remedy for the first problem, the presence of unmitigated known vulnerabilities in released software, consists mainly of a prohibition on putting any product on the market with known vulnerabilities. (Annex I, Section 1(2) prohibits the delivery of software with “known exploitable vulnerabilities.”) It is hard for me to read this without feeling insulted. Do these people not realize we are already working AS HARD AS WE CAN to prevent shipping insecure software? It appears they are trying to replace our expertise in triaging vulnerabilities and balancing the risks of security bugs with a blanket mandate. We do, of course, try not to publish versions with known vulnerabilities.

There are cases, however, when we might ship a version with a known or suspected vulnerability unpatched. Most often, this is because the vulnerability has just been reported and has not yet been verified. Sometimes the patch is not yet ready, and there are other things in the version that our users urgently need. Bounty hunters report many vulnerabilities to us that we judge to be very low severity, and we may prioritize fixing other, more severe issues first. We have a ‘blackout’ period over the end-of-year holidays when we try not to publish vulnerabilities, because many of our users cannot update then. In some cases, more systemic vulnerabilities require coordination with other developers in the DNS community, and we cannot just rush out our patch if others are not ready. There are nuances to consider that the CRA does not account for.

Not shipping software with known vulnerabilities is a lot easier if you don’t actively look for vulnerabilities in the first place. To make a halfway-decent stab at finding them, you need at least:

  • a quality testing process that includes fuzzing, static analysis, system testing, and unit tests to find bugs before release; a minimal fuzz harness is sketched after this list. (Of course, if you don’t look for vulnerabilities, then there are none to disclose…)
  • a well-advertised mechanism for external researchers and users to report security bugs. (Having a broad and sophisticated user base deploying and observing the software in multiple different environments and applications is extremely helpful for vulnerability discovery.)
  • resources for triaging and reproducing reported security issues promptly and thoroughly.
  • adequate resources and knowledge to produce, verify, and release a fix or mitigation for the vulnerability.
  • a design and code-review process that screens new code for potential issues.
  • ongoing creation of new regression tests to prevent recurrence of old defects.
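
To make the first item above concrete, here is a minimal sketch of a coverage-guided fuzz harness, written in Python with Google’s Atheris fuzzer. The parse_record function is a hypothetical stand-in for whatever input-parsing code a project wants to exercise; a project written in C, like BIND, would use a harness in that language (for example, with libFuzzer), but the shape is the same.

```python
# A minimal coverage-guided fuzz harness using Google's Atheris fuzzer
# (pip install atheris). parse_record() is a hypothetical stand-in for a
# project's own input parser; any uncaught exception is reported as a finding.
import sys

import atheris


@atheris.instrument_func  # instrument this function for coverage feedback
def parse_record(data: bytes) -> None:
    # Placeholder parsing logic: reject malformed input with ValueError.
    text = data.decode("utf-8", errors="ignore")
    fields = text.split(";")
    if len(fields) > 2 and fields[0] == "A":
        int(fields[2])  # raises ValueError on non-numeric input


def test_one_input(data: bytes) -> None:
    try:
        parse_record(data)
    except ValueError:
        pass  # expected rejection of bad input, not a security bug


if __name__ == "__main__":
    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()
```

The point is not the toy parser: it is that someone has to write, run, and triage the output of harnesses like this for every input path, on an ongoing basis.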

All of these critical functions require highly skilled, committed, and available engineering staff, as well as the supporting infrastructure and tooling they need to be productive. The CRA presumes that the only barrier to shipping secure software is a lack of organizational will, and attempts to supply that will via regulatory mandate. That might be an issue with some commercial products, but a lack of will on the part of its maintainers is hardly the biggest obstacle to improving the security of open source in general. Most open source projects are not profit-generating, so software security risks are not traded off against revenue goals. Gaps in security processes in open source are much more likely to be due to practical issues, like resource constraints or lack of knowledge, than to any lack of motivation.

One big difference between open source and closed source is that there are many more security researchers looking for vulnerabilities in open source. The CRA deals only with known vulnerabilities, but there are a lot more unknown, undiscovered vulnerabilities in the less-examined closed source.

Compliance regulation is the wrong approach

The CRA does nothing to help any organization struggling with the real task of producing secure open source. Producing quality open source software requires resources, and nothing in the CRA helps to make those resources available to non-profit open source publishers. In fact, compliance with the Act will only impose burdens on open source developers that effectively reduce the resources available for improving quality and software security.

The Act makes a point of calling out open source software as a different category, exempting open source unless it is distributed in association with some commercial activity, and explicitly including technical support as a commercial activity. However, the authors did not consider the specific challenges and opportunities of developing and releasing secure open source, which could be leveraged to improve cybersecurity.

I am as wary as the next person of the behemoths of the tech world, but the fact is, some of the most useful programs for actually supporting open source come from them. Google has its OSS-Fuzz project, and Amazon and Fastly have programs to provide free or discounted services for open source. GitHub provides continuous vulnerability scanning and notification for software on its platform. Many open source projects use the free static analysis that Coverity, now part of Synopsys, has provided to open source projects for years. Are these crumbs from the tables of the giants? Maybe so, but the point is, they are highly leveraged and objectively useful in improving the quality and security of open source software. ISC’s development infrastructure is almost entirely based on open source, including the Mattermost chat application and the GitLab version control system, both of which are sustained by their commercial operations. These are all helpful resources that, in one way or another, help us produce more usable, better-quality, and more secure open source. The CRA, as a compliance-focused regulation, would be more effective in addressing cyber resilience if it were accompanied by programs like these, providing practical assistance to open source projects, most of which are struggling to sustain themselves.

2) Access to information

The second major problem identified by the CRA is education, an “insufficient understanding and access to information by users.” The CRA includes requirements for documentation of secure deployment practices and secure default configurations, which are perfectly good recommendations. The CRA also includes some confusing language about mandated exploit and vulnerability reporting, which will probably be clarified in the future. The CRA does mention that developers should track and report on the software components included in products, which is undeniably important for operators to be able to tell what vulnerabilities they may be exposed to (although the industry doesn’t yet have a fully specified standard for how to do this). However, when it comes to open source, given that the CRA covers only known vulnerabilities, it hardly seems likely that the biggest obstacle for our users is a lack of knowledge that these vulnerabilities exist.

The proposed regulation does not address the more serious problem: the slow application of available security fixes, and lax software maintenance in general, in production deployments. Researchers have reported that even some very well-publicized vulnerabilities, such as the Log4j vulnerability, or even Heartbleed, went unpatched for far too long in production deployments.

Here again, the reality is that keeping up with the flood of vulnerability reports and resulting patches is simply a lot of hard work, and many organizations don’t have the resources or the skills to do it. Without a doubt, this is one of the main drivers of the popular adoption of cloud-based services, which shift the system maintenance burden onto an external vendor. The CRA is not wrong to require that product vendors provide decent product documentation and notify users of known severe defects, but this alone is insufficient to solve the problem of the many unpatched systems out on the Internet. In fact, to the extent that the CRA may discourage developers from producing pre-compiled binaries for their users (because those would be additional products subject to compliance regulation), it may make it harder for users to keep their systems updated.

The EU could attempt to lower the burden on systems maintainers by, for example, funding an automated, machine-readable, subscription-based service for vulnerability notifications and patches. Collecting information from EU organizations on what open source they have deployed would also provide valuable data on where the biggest exposure might be: because open source software is freely distributed and downloaded, nobody has good data on how widely deployed open source packages are.
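
As a rough sketch of what the consumer side of such a machine-readable subscription service might look like, the snippet below polls a hypothetical JSON advisory feed and matches entries against a local inventory of deployed packages. The feed URL and the field names are invented for illustration; a real service would define its own schema, or reuse an existing format such as OSV or CSAF.

```python
# Sketch of a client for a hypothetical machine-readable vulnerability feed.
# The URL and JSON field names are invented; a real service would publish
# its own schema (or reuse an existing one such as OSV or CSAF).
import json
import urllib.request

FEED_URL = "https://vuln-feed.example.eu/advisories.json"  # hypothetical

# Local inventory of deployed open source packages and their versions.
DEPLOYED = {"bind": "9.18.11", "kea": "2.2.0"}


def check_feed() -> None:
    with urllib.request.urlopen(FEED_URL) as response:
        advisories = json.load(response)
    for adv in advisories:
        package = adv.get("package")
        if package in DEPLOYED and DEPLOYED[package] in adv.get("affected_versions", []):
            print(f"ALERT: {adv.get('id')} affects {package} "
                  f"{DEPLOYED[package]}: {adv.get('summary')}")


if __name__ == "__main__":
    check_feed()
```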

The EU could maintain a repository of links to ‘approved’ open source versions. It could sponsor research and the production of easy-to-apply guidance on systems security. It might even be useful to maintain a central directory of open source publishers to help identify downstream suppliers. The CRA does include a requirement to provide Software Bills of Materials (SBOMs). SBOMs may eventually help implementers discover vulnerable products in their networks, but the standards and technology aren’t yet mature enough for mandatory compliance requirements to be helpful. The industry could use some focus on nailing down a fully specified standard as soon as possible, so this information can be made available universally in a machine-readable format; some free open source tools to help implementers identify vulnerable products in their networks would also be useful.
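
For readers who have not encountered one, here is an illustrative sketch of a minimal machine-readable SBOM. The field names loosely follow the CycloneDX JSON layout, one of the candidate standards; the component entry itself is made up.

```python
# An illustrative minimal SBOM emitted as JSON. The top-level field names
# loosely follow the CycloneDX layout; the component details are made up.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",  # an embedded dependency of the product
            "version": "3.0.8",
            "purl": "pkg:generic/openssl@3.0.8",  # package URL identifier
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

Given a document like this for every product, an implementer could mechanically match component names and versions against published advisories, which is exactly the use case the CRA’s SBOM requirement is aiming at.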

If the approach has to involve compliance and regulation, the EU could consider imposing some requirements on the organizations that deploy open source or reuse it in commercial products, instead of adding to the burden on open source producers. These could include requirements for implementers to support the open source they use, to patch known severe vulnerabilities, or to run only actively maintained versions. Perhaps these requirements could be tied to access to cybersecurity insurance or to a best-practices rating system, to help nudge management into allocating resources towards system maintenance.

Summary

Nothing in this regulation will improve the cybersecurity of open source. Some provisions will waste scarce resources, and in a few cases the regulations may conflict with our established best practices. However, the bigger question is: what could the European Union do that would help improve cyber resilience in general, and open source software security in particular?

  1. Sponsor or provide highly leveraged services or open source tools and infrastructure for assuring software quality. These could include security audits; fuzzing, scanning, and analysis tools; or hosting or computing resources.

  2. Provide funding for ongoing maintenance of important open source software, or encourage and incentivize the implementers and users of open source to contribute appropriately to the common goods they benefit from. There is EU funding for open source projects, but of all the open source development grant offerings I have seen, exactly zero aim to fund maintenance or quality operations for established projects.

  3. Provide guidance for businesses and consumers about cyber-resilience best practices, based on peer-reviewed research in the field. ISC’s BIND 9 users can, for example, consult the NIST Secure Domain Name System (DNS) Deployment Guide and the associated Security Technical Implementation Guide (STIG) checklists. Per the CRA, ISC would have to provide this advice ourselves, and to be honest, we are probably not the experts in this area.

  4. Provide an optional self-certification process that characterizes software vendors’ practices with regard to security vulnerabilities in a way that is both transparent and easy for users to consume. An example of such a program is the Linux Foundation’s OpenSSF Best Practices Badge. One way to incentivize implementers to select software that follows cybersecurity best practices might be cyber-insurance underwriting that offers cost incentives for practices that reduce risk.

  5. Research how best to assist implementers in reducing the burden of ongoing system maintenance, and invest in those solutions. These will likely include some sort of system for monitoring standardized SBOMs and a subscription-based vulnerability notification system. A notification system that permitted, or even required, enterprises to report the open source they are using, in exchange for prompt notifications of vulnerabilities, would also provide the EU with a census of open source usage in the community.

  6. Consult with the open source community in developing a plan to regulate it. Perhaps this should have been my first suggestion?


What happens next?

The regulation will take effect in 24 months. Between now and then, there is an extensive process of developing, in each EU member country, the local regulations to implement the Act, including any “harmonized” standards that might apply. The Commission will also have to recruit and train an army of auditors to run the compliance process. There should be opportunities to engage with the regulatory authorities in each EU country over the coming year, and we hope open source users as well as developers will participate in this process. The EU Commission team working on this regulation has been engaging with the public in multiple fora to explain the regulation and has been open to comments on its impact. We are pleased to see that the upcoming FOSDEM conference will include a panel discussion on the EU Cyber Resilience Act with several of the Act’s authors, as well as a talk on the related Product Liability Directive.
