Feedback from The Apache Software Foundation on the Free and Open Source Software Audit (FOSSA)
Background
Two of those people were Julia Reda and Max Andersson, Members of the European Parliament. As a result they proposed (and directed Europe to fund) a pilot project, the "Free and Open Source Software Audit" (FOSSA), within a larger workstream described as "€1 million to demonstrate security and freedom are not opposites".
Audit Process
Feedback on FOSSA
Security Reports
    ....
    results = (results_t *) mallocOrDie(sizeof(results_t));
    results->sum = 0;
    for(int i = 0; i < ptr->array_len; i++) {
        results->sum += ptr->array[i];
    ....
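A fragment like this is a classic source of false positives: an automated tool will typically flag the allocation as unchecked (a possible NULL-pointer dereference), while the name of the wrapper already suggests it can never return NULL. A minimal sketch of such a wrapper - an assumption here, as the actual project code may differ - makes that explicit:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical wrapper, sketched for illustration: it either
     * returns valid memory or terminates the process, so callers
     * can never observe a NULL return value. */
    static void *mallocOrDie(size_t len)
    {
        void *p = malloc(len);
        if (p == NULL) {
            perror("malloc");
            abort(); /* does not return on failure */
        }
        return p;
    }

A human reviewer dismisses such a report in seconds; forwarding the tool output verbatim puts that cost on the receiving project instead.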
People and Community versus tools
Secondly, there is the impact of, and the cost of dealing with, the report and the resulting changes. Often the report will find a lot of 'low' issues and perhaps one or two serious ones. For the latter it is absolutely warranted to 'light up' the security response of an open source project, and have people rush into action to triage, fix and follow up with responsible disclosure.
The same cannot be said for the 'low' issues: given that the code is already open source, generally anyone (bad actors and good actors alike) can find these too. So in a lot of cases it is better to work with the community and file these as bug reports; or even better, since simple issues usually have simple, non-controversial fixes, submit the fixes and associated test cases as contributions. (It is often less work for the finder of the bug to submit a technical patch and test case than to fully write up a nicely formatted PDF report.)
Bug Bounties - a Panacea ?
- Fees are not high enough to entice, by the fee alone, the expert volunteers one would need 'in bulk'.
Take the recent Azure-Linux update reporting or the Yahoo issue as examples: 5 to 10k is unlikely to come even close to the actual out-of-pocket cost of the few weeks to few months of engineering time at that quality level (or to compensating the years invested in training) that was required to find, analyse and report such an issue.
- The same applies to the higher 'competition' fees, topping out at 30-100k. In those cases only the first to report gets paid; so the actual payment per issue found is lower on average. With some 4 to 8 top global teams at this level, and 2 to 4 high-value target events per year, that works out at well below 8k per team member per year on average (see the worked example after this list).
- The very best people will only engage in this as a hobby and (hence) for personal credit and pride; OR when they work for a vulnerability company that wants the PR and marketing.
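To make the 'competition' numbers above concrete, here is a back-of-envelope calculation; the team size of four is an assumption, the other figures are midpoints of the ranges quoted above:

    3 events/year x 65k average top prize  = 195k in total winnings per year
    195k / 6 competing top teams           = ~32.5k per team per year
    32.5k / 4 team members                 = ~8k per person per year

And that is the optimistic case: it assumes the winnings spread evenly over the teams, and it ignores the unpaid preparation for the events that are lost.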
BUT the previous point means that personal credit & marketing are the real driving value, not the money itself. So what happens, then, if we introduce money into this (already credit and marketing driven) situation?
- Very large numbers of people without sufficient skill may be tempted; but then one has to worry about the impact on the open source community: is dealing with reports at that level a better use of volunteers' time than having insiders look for things ? Will time spent on these fixes distract from the important things ?
Should we ask people to pre-filter; or ask the people managing bug-hunting programmes to pre-vet or otherwise carry an administrative burden ? (Keep in mind that there are third-party bug-hunting programmes for Apache code over which the Apache Software Foundation has no control.)
- It is likely that the 'grunt' and 'boring' work in the security area will suffer: 'let that be done by the paid folks'.
- It fundamentally shifts the non-monetary reward (and the monetary one - but that is too low to be relevant) from writing secure, good code and caring for and maintaining it, to the negative: finding a flaw in (someone else's) code. So the feel-good, job-well-done and other feedback cycles now bypass the primary production process (that of writing good code), or at the very least make that feedback loop involve a bug-bounty party.
So ultimately it is about the risks of what economists call "externalisation": letting a cost fall on a party that did not choose to incur that cost - or denying that party a choice in how to spend its resources most effectively.
Summary and suggestions for the next FOSSA Audits
- Submitting the results of automated validation (even with some human vetting) is generally a negative contribution to security.
- Submitting a specific, detailed vulnerability that includes some sort of analysis as to how it could be exploited is generally a win.
- Broad classes of issues which (perhaps rightly!) give you hits all over the code base are generally only worth the time spent on them if there are additional resources willing to work on the structural fixes, write the test cases and test them on the myriad of platforms and settings; and if a lot of the analysis and planning for this work has been done prior to submitting the issue (generally to a public mailing list).
From this it also follows that narrow and specific reports (and hence more "new" and "unique" ones) are generally more likely to increase overall security; while making the results of something broad and shallow public is, at best, not going to decrease security.
- Lighting up the security apparatus of an open source project is not 'free'. People are volunteers. So consider splitting your issues into: ones that need a responsible disclosure path; and ones that can go straight to the public lists. Keep in mind that, as the code is open source, you generally can err towards the open path a bit - other (bad) actors can run the same tools and processes as you.
- Consider raising the bar: rather than report a potential vulnerability, analyse it; have the resources to (help) solve it and to support the community with the expensive things, such as the human manpower for subsequent regression testing, documentation, unit tests or searching the code for similar issues.
- Security is a process, carried out over very long periods of time. So consider whether you can consistently spend resources over long periods on things which are hard to do for (isolated) volunteers. And if it is something like comprehensive fuzzing, code-coverage or condition/decision testing - consider the fact that it is only valuable if it a) is done over long periods of time and b) comes with a large block of human manpower to do things like analysing the results and updating the test cases (see the fuzzing sketch after this list).
- Anything that increases complexity is a risk, and may have long-term negative consequences: it may lead to code which is harder to read, harder to maintain, or for which the pool of people that can maintain it becomes disproportionately smaller. A broad, sweeping change that increases complexity may need to be backed by a significant (5-10+ year) commitment to maintenance in order to be safe to implement; especially if the security improvement it brings is modest.
- Carefully consider the threat model and the actors when you are classing something as a security hole - especially around APIs.
- Carefully consider what type of resources you want to mobilise in the wider community; and what incentivises the people and processes that are most likely to improve overall security and safety. And take the overall, long-term health and social patterns of the receiving community into account when such forces for good are "external". It is all too easy to, in effect, cause a "Denial of Service" style effect; no matter how well intentioned.
- World-class expertise is rare; and by extension, the experts are often isolated. Bringing them together for long periods of time in relatively neutral settings gives a synergy which is hard to get otherwise. Consider using a JRC or ENISA setting as a base for long-term, committed efforts; efforts that are perhaps more about strengthening and improving large-scale (IT) infrastructures and (consumer) safety than about security.
- Bug bounties are not the only option. Some open source communities have benefited from "grants" or "stipends", where a specific issue got tackled or addressed. In some cases, such as for example Google's Summer of Code, the focus is on relatively young people, and it helps train them up; in other cases it gives established experts room for a year (or a few) to really bottom out some long-standing issue.
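As an aside to the fuzzing point above: the harness itself is the cheap part. A minimal libFuzzer-style fuzz target might look like the sketch below, where parse_record() is a hypothetical stand-in for whatever parser or decoder a real project exposes:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical function under test; stands in for whatever
     * parser or decoder the real project actually exposes. */
    extern int parse_record(const uint8_t *buf, size_t len);

    /* Standard libFuzzer entry point: the engine calls this with
     * generated inputs, millions of times, for as long as it runs. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        parse_record(data, size);
        return 0; /* a non-crashing input is simply 'not a finding' */
    }

Built with clang -fsanitize=fuzzer,address this will run indefinitely; which is precisely the point - nearly all of the value sits in the sustained human work of triaging crashes, maintaining the input corpus and re-analysing the results as the code changes.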
With respect to the final point of the list above (grants and stipends): security engineering (and its associated areas, such as privacy, trust and so on) is a "hard" thing to hire for; the market generally lacks capacity and capability. Also in Europe.
While open source's access to 'lots of eyeballs' does help, it does not magically give us access to a lot of the right eyeballs.
Yet increasing both Capacity and Capability in society does help. And that is a long process that starts early.