RIPE 81

Archives

Routing Working Group session
.
28 October 2020
.
11 a.m. CET
.

PAUL HOOGSTEDER: Good morning, Routing Working Group, at RIPE 81. Can we have slide 2, please?
.
I'd like to welcome you all on behalf of myself and my co‑chairs, Ignas and Job, who is also here. Please remember that the chat is for chatting, the Q&A is for asking questions, especially at the end of the presentations, and you can use the audio queue for that as well.
.
The minutes for RIPE 80 have been published. Let's get on to our agenda for today.
.
Two presentations: One about IRRd by Job, and then one about RPKI by Nathalie.

JOB SNIJDERS: The two of us are the driving forces behind the development of the Internet Routing Registry daemon, also known as IRRd. IRRd is a critical component of the Internet's core routing, as many transit and Internet Exchange providers use the IRRd software to interface with the routing system information that is available as route, route6 or as-set objects.
.
IRRd can be used not just to publish routing information, but also to consume it, and IRRd is the principal interface for software such as peval, bgpq3 and bgpq4, all of which are tools used under the hood in many EBGP prefix-list generation processes running throughout the Internet.
.
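To make that consumption path concrete ‑‑ a minimal sketch, not from the talk, with placeholder host and as-set names ‑‑ a provisioning script might drive bgpq4 against an IRRd server roughly like this:

```python
# A hedged sketch of driving bgpq4 against an IRRd instance from a
# provisioning script. The hostname and as-set names are placeholders.
import subprocess

cmd = [
    "bgpq4",
    "-h", "irrd.example.net",  # IRRd server to query (assumed hostname)
    "-4",                      # generate IPv4 prefixes
    "-A",                      # aggregate adjacent prefixes where possible
    "-l", "AS-EXAMPLE-IN",     # name for the generated prefix-list
    "AS-EXAMPLE",              # the as-set to expand into prefixes
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)  # prefix-list configuration ready for the router
```
.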
The development principles behind IRRd 4 are that it should be liberally licensed, so that anybody can use it for any purpose. We want end‑to‑end testing through regression tests, a hundred percent coverage in terms of unit testing, and extensive documentation that ensures that any user of the software can understand what the software does and why. We take great pride in the quality of the software as it is today.
.
So a little bit of history on what this IRRd project is and how it came to be. Decades ago, the world was running on IRRd version 2 and version 3. Both were written in C, and they were very hard to extend. If you changed a tiny thing over here, other functionality would fall apart, and adding security features to those IRRd code bases turned out to be very complicated. In fact, it was so complicated that rewriting the software from scratch in a new programming language was less of an effort than revitalising the old code ‑‑ and the result of this effort is IRRd 4. IRRd 4 has all the same features and functionality, and some of the quirks, of previous versions of IRRd. So for all intents and purposes it looks very similar, but it's not: under the hood it is an entirely different code base. The moment we switched over from IRRd 3 to IRRd 4, even though nothing changed for the outside world, was a very exciting moment, to see if our IRRd reimplementation had been a faithful reproduction of what IRRd does for the Internet.
.
Now, this software was finalised in early 2019 and released to the world in May 2019, and subsequently deployed in many places, including public‑facing IRRd services, but also at the RIR level: LACNIC and ARIN, for instance, use the IRRd 4 software.
.
So, now that we have a code base that we understand, that is all documented and well tested, both in unit and regression testing, what is it that we can do with this? We can leap forward, and this is IRRd 4.1. IRRd 4.1 represents the first feature release based on the IRRd 4 code base where the focus is not to reproduce the behaviours of previous versions of IRRd, but to leverage this new code base and innovate on top of it.
.
So what IRRd 4.1 brought us was the capability to produce synthetic NRTM feeds. This is a feature that is useful for some RIRs where the authoritative database is actually an internal database, internal to the RIR. This could be, for instance, an RPKI‑based database or a database that is dependent on some portal the RIR is offering. This release also had significant performance improvements compared to IRRd 4, and, all in all, this was the fruit of a very big project to even get to a position where we could start innovating in the IRRd space again.
.
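For context, NRTM is the mirroring protocol IRRd speaks over the whois port: a client asks a source for a range of serials and receives the corresponding add and delete operations. A minimal client-side sketch, assuming a placeholder host and source name:

```python
# A minimal NRTM v3 client sketch: request operations for a serial range
# from an IRRd instance over the whois port. Host and source are placeholders.
import socket

HOST, PORT = "irrd.example.net", 43
QUERY = b"-g EXAMPLE:3:1000-LAST\n"  # source EXAMPLE, NRTM v3, serials 1000..latest

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    sock.sendall(QUERY)
    chunks = []
    while data := sock.recv(4096):  # read until the server closes the connection
        chunks.append(data)

print(b"".join(chunks).decode(errors="replace"))
```
.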
You may ask why innovation in this legacy, ancient, old‑fashioned, traditional technology is even relevant. The reason that innovation in the IRR is still important is that we somehow have to pave a pathway from our IRR‑focused world to an RPKI‑focused world, and the IRRd software sits in the middle between IRR clients and routing databases. If we make it more RPKI‑aware, we can hopefully eventually move to a world where the IRR can no longer negatively impact our routing. For that, we would rely exclusively on the RPKI.
.
Some of you may recall that, in the last two years, a big project has been underway in the RIPE community to apply the RPKI origin validation procedure, as outlined in RFC 6811, to the RIPE non‑authoritative database. The RIPE non‑authoritative database is a historic artifact which predominantly dates from the time when all inetnums were moved to AFRINIC, but the associated route and route6 objects were not. This resulted in a curious situation where there was authorisation if the resource was RIPE‑managed, but if the resource was not managed by the RIPE NCC, anybody could create any route object.

Now the RIPE database has been split into an authoritative database, which strictly follows the authorisation rules outlined in the RIPE database model, and a RIPE non‑authoritative database, which contains all objects for which we could not affirmatively conclude that they were created with the consent of the resource holder.

The thinking was that we could use RPKI information to deduce which of the route objects could be deleted: if a route object stored in the RIPE non‑authoritative database is in conflict with published RPKI information, we know that the published RPKI information came from the resource holder, and if the RIPE non‑authoritative object is in conflict with that, apparently the resource holder does not want such objects to exist, as such objects describe a state of routing which is impossible from the perspective of rejecting RPKI invalids on your eBGP sessions.
.
So, long story short, this policy was captured in a document called RIPE‑731, and this functionality, which exists in the RIPE database software for the RIPE non‑authoritative database, is now available to everyone through the IRRd integration. And this is fully automated and enabled by default, which makes it super easy to use.
.
So let's take as an example some route objects as they currently exist. At the bottom of the screen you can see a representation of an RPKI ROA for a /15, which was translated to a VRP and then loaded into IRRd. This RPKI ROA allows 61.112.0.0/15 up through a maximum prefix length of /24. The route object that you see at the top of the screen, a /16, matches in terms of the authorised origin and matches in terms of the allowed prefix length. So the RPKI origin validation state of that particular route object is valid, and as such this route object remains available to everybody and appears in queries.
.
Another example is 93.175.147.0/24. This is a representation of an RPKI ROA; this is not a real route object, it is a synthesised route object.
.
Now, if somebody were to try to publish an IRR route object that is in conflict with the published RPKI ROA, the bottom route object is invalid, because the RPKI ROA did not list AS12654 as an authorised origin for that specific prefix.
.
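To make the procedure behind these examples concrete, here is a minimal sketch of RFC 6811 origin validation as IRRd 4.1 applies it to route objects; the VRP contents and origin ASNs are illustrative placeholders, not live RPKI data:

```python
# A minimal sketch of RFC 6811 origin validation applied to route objects.
# The VRP and the origin ASNs below are illustrative placeholders.
from ipaddress import ip_network

# Validated ROA Payloads: (prefix, maxLength, origin ASN)
VRPS = [
    (ip_network("61.112.0.0/15"), 24, 64496),  # assumed origin for the /15 example
]

def validate(route: str, origin: int) -> str:
    """Classify a route object as valid, invalid or not_found."""
    prefix = ip_network(route)
    covered = False
    for vrp_prefix, max_length, vrp_origin in VRPS:
        if prefix.version == vrp_prefix.version and prefix.subnet_of(vrp_prefix):
            covered = True  # at least one VRP covers this route
            if origin == vrp_origin and prefix.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "not_found"

print(validate("61.112.0.0/16", 64496))    # valid: origin and length both match
print(validate("61.112.240.0/25", 64496))  # invalid: /25 exceeds maxLength /24
print(validate("61.112.0.0/16", 64511))    # invalid: origin not authorised
print(validate("198.51.100.0/24", 64496))  # not_found: no covering VRP

# IRRd 4.1 then suppresses the invalids, so clients only ever see something like:
# [obj for obj in route_objects if validate(obj.prefix, obj.origin) != "invalid"]
```
.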
In the previous world, the world before IRRd 4.1, such objects would just exist in the IRR ecosystem and would make it into prefix filters, and this is detrimental to Internet operations. So what IRRd 4.1 does is: it periodically imports all RPKI ROAs that exist at the five RIRs, it uses this information to apply the origin validation procedure to all route and route6 objects it is aware of, and then, based on that, it will suppress, delete or reject objects that are in conflict with published RPKI information.
.
All the benefits of the RIPE‑731 procedure will now also exist for all IRR databases, because IRRd instances are used to mirror and aggregate the various databases and present them via a uniform interface to the clients that generate filters.
.
So we apply an RPKI filtering mechanism in the middle, between the client and the databases. The databases don't need to support RPKI. The client doesn't need to support RPKI. But this layer in the middle, IRRd, performs origin validation and hides conflicting information from clients.
.
The implementation has been done in such a way that, if information comes in over an NRTM feed, anything that is in conflict will be marked as invalid and be suppressed when the IRRd instance passes this information along to other instances or to clients such as bgpq3 or bgpq4.
.
If you attempt to create an object through, say, the e‑mail interface or an API call, such objects, if they are in conflict with published RPKI information, will be rejected, and the user will be presented with a clear error that shows that they either need to update the RPKI ROA ‑‑ or add an RPKI ROA ‑‑ to permit this route object to exist, or, in a more likely scenario, people realise that there was a typo in the route object they attempted to create and that they were perhaps accidentally trying to create a route object which covered space belonging to other organisations.
.
If we take a look at how much RPKI‑invalid information exists in the ecosystem, the score is actually quite good, except for the two largest IRR databases on the planet, which are RADB and NTTCOM. RADB is the lead polluter with conflicting information, where about 10% of route objects that exist in RADB are RPKI invalid, and NTTCOM is second with 6.68%.
.
So, this shows that there are still a lot of rogue typos and fat‑fingered information in the IRR, but since there never are incentives to clean up old IRR information, such information continues to exist. This is why we need an automated process that is reliable and adheres to an industry‑wide standard; in this case, we follow the philosophy of rejecting RPKI‑invalid route announcements on EBGP borders and apply that same philosophy as the information propagates through the IRR system.
.
So, over time, you will see these numbers drastically reduce as more and more IRRd operators upgrade to IRRd 4.1, which performs this type of filtering on behalf of the operator.
.
RPKI information, if it exists, is the most preferred source about a resource holder's routing intentions. If there are no RPKI ROAs covering a certain route object, nothing happens to that route object; its validation state is 'not found'. So, just like with BGP routing, if there is no RPKI information, RPKI will not play a role in any decision‑making related to that object.
.
In the total ecosystem, there are almost 200,000 invalid route objects from an RPKI perspective, so this is, in absolute numbers, quite large. But in relative numbers, it's only 5% of the route objects that exist. If we compare this to the early days of deploying origin validation in the BGP control plane, the number of invalids seemed problematic there too, but when we delve into it and analyse how much traffic is actually flowing towards those objects ‑‑ are those objects maybe covered by less specifics that are not found? Are those objects actually problematic? ‑‑ we find that, in the vast majority of cases, they are not.
.
Yes, this is a very big migration. It's a big step, it has a huge impact, but we should consider it a sort of spring cleaning. Once you do the spring cleaning, you enjoy the benefits for months to come, and in the case of the IRRd 4.1 RPKI capabilities, once we do this spring cleaning, going forward the cleanup process happens automatically, all the time, and we do not need to worry about it ever again.
.
So, IRRd 4.1 brings RPKI validation to the IRR ecosystem, and I believe this to be a phenomenal improvement over the current state of affairs in the IRR. This RPKI protection mechanism operates regardless of the RPKI capabilities of an IRR source and regardless of its policies. It really is designed to protect the consumers of IRR data, such as bgpq3 or bgpq4.

SASHA ROMIJN: So that's IRRd 4.1, which was released very recently, but we are continuing to work towards IRRd 4.2, the next release, where we are going to focus on the query and update interfaces. Currently, IRRd has the query interface taken from versions 2 and 3, something that many of you will be somewhat familiar with. This protocol has a lot of downsides. It has no authentication, it is all based on plain text, and there is no support for anything like SSL to even verify whether the data you received is authentic before loading it into your routers. There are different output formats that are completely incompatible with each other, there are inconsistent formats for queries, and it is poorly extensible for new things. We have done some hacks, but none of it is really great. It's a fairly unpleasant interface to work with.

So, for IRRd 4.2, we are working on a new interface based on GraphQL. If you don't know it, it is a small layer on top of HTTP with a JSON‑inspired query language, which is pretty fast to learn. The output is all JSON, and it offers much more flexibility in terms of the kind of queries we can run, how you get the output, and what kind of things you are interested in. It also has support for lookups through graphs, because essentially IRR data is kind of like a graph ‑‑ not really, but close enough that this gives us benefits. You can interface with it with GraphQL client libraries, and there are many of those, but it's a small layer, so you should be able to work it out with a simple HTTP library, as sketched below.
.
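A minimal sketch of such a query from a plain HTTP library follows; the endpoint path and field names are assumptions for illustration, and the exact schema lives in the IRRd documentation:

```python
# A minimal sketch of posting a GraphQL query to an IRRd 4.2 instance with a
# plain HTTP library. The endpoint URL and field names are assumptions;
# consult the IRRd documentation for the exact schema.
import json
import urllib.request

QUERY = """
{
  rpslObjects(rpslPk: "AS-EXAMPLE") {
    rpslPk
    objectClass
    objectText
  }
}
"""

req = urllib.request.Request(
    "https://irrd.example.net/graphql",  # assumed endpoint path
    data=json.dumps({"query": QUERY}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.load(resp), indent=2))
```
.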
I'll show you a demo of some of the things we can do with what we have so far.

GraphQL comes with a playground, which I'm showing here, along with an example of a GraphQL query. I am showing this one because it's simple. These are all the fields that we're querying for; if you run it, you basically get back JSON with all the details ‑‑ in this case, the database status of all the databases known to this instance. You can also query RPSL objects; in this case I am running a query for any object whose primary key is AS‑2914:AS‑GLOBAL. You can see from the highlighting here that this is an API that has a full schema defined, so this playground knows that this parameter actually wants a string, or you can put in multiple. So I can run that, and you can see that you basically get the parsed data that IRRd has produced, including the mnt-by. You can also query the original object text ‑‑ the auto‑complete is not entirely working yet ‑‑ so this is the actual object text like you would get from WHOIS. But IRRd already extracts a lot of fields. It knows, for example, that mnt-by is one or more, so it's a list, and it has already split out the individual items. So you don't have to parse RPSL as much any more.
.
I can also do things like saying: if it's an as-set, then also retrieve the members of that set. I'll remove the object text. And then you can see that the individual members are available here, and this is parsed regardless of whether they are on one line or multiple lines; these are the actual members of the object.

Then, because this is sort of a graph, you can actually also dig deeper and say: resolve each member object, and then find the primary key of each of those objects and the members those have in turn. What you get here is that this object has a reference to AS‑2914:AS‑ASIA, which in turn has these members.
.
You can then keep digging deeper, because this is kind of a graph. So we can also say: from those member objects, resolve the members that are part of those, then resolve their maintainer objects, get the primary key there as well, so we know what the object is, and then get the notify attribute.
.
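The nested lookup being described might look roughly like this as a GraphQL document; the relation field names are assumptions based on the demo:

```python
# Roughly the nested lookup described above, as a GraphQL document in a
# Python string. Relation field names are assumptions based on the demo.
NESTED_QUERY = """
{
  rpslObjects(rpslPk: "AS-EXAMPLE") {
    rpslPk
    membersObjs {          # resolve each member object
      rpslPk
      membersObjs {        # ...and the members those have in turn
        rpslPk
        mntByObjs {        # ...and their maintainer objects
          rpslPk
          notify
        }
      }
    }
  }
}
"""
```
.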
This takes a lot longer, because it digs in pretty deep and this is still at an experimental stage. In this case you can basically dig all the way through the IRR graph. This is not what you would use for actual full set resolving, because that is more complex than can be expressed in GraphQL ‑‑ like I said, IRR data is kind of a graph, but not really. Anyway, it allows you to dig pretty deep.
.
In another example, this is a query that looks for all objects that are maintained by my maintainer, and you can see a few: there is an aut-num, there is a maintainer, there is a key-cert object, a role, an inet6num. But you can also do things like this: each object has a journal, so I can query the serial, the timestamp, the origin and the operation of the history of all these objects, and you can see here that my aut-num in the local IRRd database ‑‑ this only queries the local instance ‑‑ was fairly recently updated, because the mirror sent an add or update, and the other objects saw nothing. You can also combine query parameters, so you could say: I also want everything that has an object class of aut-num or inet6num. You can add lots of other filters: you could filter on having certain members, on contacts, on sources, on IP matches ‑‑ basically a lot of the things you can do in the current WHOIS, but in a more flexible way. And this is already running as an experimental interface, so if you are interested in playing with this and seeing what it can do, then ping me or Job and we'll get you the details of how to access it.
.
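A sketch of the combined filter-plus-journal query being described, again with assumed filter and field names:

```python
# A sketch of the filter-plus-journal query described above; the filter and
# field names are assumptions based on the demo.
FILTER_QUERY = """
{
  rpslObjects(mntBy: "EXAMPLE-MNT",
              objectClass: ["aut-num", "inet6num"]) {
    rpslPk
    objectClass
    journal {        # per-object history
      serialNrtm
      timestamp
      origin
      operation
    }
  }
}
"""
```
.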
We're also interested in what people think might be useful as additional methods to query things, and just any other thoughts you have about this new query interface. There is documentation for it as well, because it does take a bit of getting used to, but once you do, it's very powerful.
.
So, we have just set up an IRRd community contract with the support of Netnod and the Swedish Internet Foundation, and what this means is that we can support anyone deploying IRRd for whatever purpose, and help out with questions or feature requests or bugs or issues in the documentation ‑‑ any kind of thing that community members, the IRRd community as a whole, run into. There are some limits to how much support we can give, but there are a lot of options. So, if this applies to you, always feel free to open issues in the GitHub repository about anything that you run into, any kind of support you need with deploying IRRd, and we will see what we can do for you.

PAUL HOOGSTEDER: Thank you, Job and Sasha, from the past. Let's see if we have got any questions. Nothing in the Q&A. People can also ask audio questions. Nothing? Well, thanks, Job and Sasha.

Can we get back to the agenda? That seems to be a problem. Is Nathalie here for the next agenda item? There she is.

NATHALIE TRENAMAN: Good morning, everyone. Let me share my screen. Do I really want to share my screen? Yes, I do. Let's see...
.
So, the screen is looking good, I'm here, I think you can hear me ‑‑ ready to go.
.
So, good morning, everybody, and thank you for giving me the opportunity to present on a project that we have been quite busy with recently. My name is Nathalie Trenaman, and I am the routing security programme manager at RIPE NCC.
.
As you know, RPKI is booming, and the RIPE NCC operates one of the five trust anchors for RPKI. That means that we have to make sure that the trust anchor is stable, resilient and secure. In the last year, we have made a lot of progress on a project that we call the RPKI resiliency project. It consists of the following areas, and I will talk about each of these areas in a bit more detail in this presentation.
.
First, one of the things that we have to ask ourselves is: Is what we're doing the right way of doing it? Is our certificate authority actually secure? And what can we improve?
.
And when you start thinking about these kind of questions, you discover that there are hundreds of audit frameworks out there to make assessments or audits about these kinds of questions.
.
So we looked long and hard, because none of these established audit frameworks completely encompasses everything in RPKI. There are the traditional IT security frameworks, there are some PKI frameworks, but nothing really geared towards RPKI. So we were looking for a well‑known IT security framework that has some flexibility to add the specifics of RPKI.
.
In the end, we came to the conclusion that a SOC 2 Type II audit framework would have those elements. So it is quite a well‑known framework, it has the standard IT security elements, and you can tailor it towards something else.
.
I'll get to that ‑‑ the BSI.
.
Because we can't build this audit framework completely ourselves, we have to work with a standards body that helps us put the framework together, so that other trust anchors can potentially also use it in the future if they want. So, we signed a contract with the British Standards Institution ‑‑ a well‑known organisation that develops standards ‑‑ to develop such an RPKI audit framework.
.
So, with this SOC 2 Type II, we have the flexibility to tailor it towards RPKI. What it also allows ‑‑ and, for me, this is one of the best things ‑‑ is that we will be able to produce a SOC 3 report. What is a SOC 3 report? It's quite a detailed report of the findings of the actual audit. And that is good news for the community, because we want to be as transparent as we can possibly be.
.
So that is the ultimate goal: to provide transparency on what we have learned and what we can improve, and we want to share that with you, so you can benefit from it as well.
.
But first we have to build this framework, and that is quite a lot of work to put together. We plan to do that early in 2021; we are quite ambitious working towards that, and hopefully it's done early '21.
.
Now, BSI is also an organisation that can perform the actual audit, but, as you might know, it is of course bad practice to have the same company that built the audit framework actually perform the audit, so that is why we will look for another organisation to do the actual audit. That is not too difficult, because SOC 2 Type II is quite well known amongst IT auditors.
.
Now, what is included by default in such a SOC 2 Type II? As you can see: security, which for RPKI is incredibly important; availability, also very important, because we see this as critical infrastructure; integrity; confidentiality; and privacy. While there is no real privacy‑sensitive data in RPKI objects as such, for all the stuff around it we will look at access control and two‑factor authentication where we have to.
.
So, that's SOC 2 Type II. Be ready for more updates in the next sessions.
.
Then another thing that we were wondering, after ten years of running this trust anchor and certificate authority: to what extent do we comply with the RPKI and crypto RFCs? How far do we comply, and did we interpret all the RFCs correctly? Because, as you know, some RFCs are not that straightforward to interpret in only one way.
.
So that is when we started looking for an organisation that could help us do an assessment to check these things, and that is not that easy, because RPKI is a very special type of animal. But we found a great partner in Radically Open Security, which is a security firm based here in the Netherlands. They have a lot of expertise, and they helped us do this assessment. They did that in August and September of this year, and the report was delivered in early October 2020.
.
Now, I have to say I was quite impressed with the level of detail in the report and the recommendations that they gave, because, well, there is a lot to work on. But first, the good news.
.
The result of the assessment was mostly positive. So, the implementation of the RPKI core complies with the RPKI RFCs to a high degree, although some issues are present. I'll get to that.
.
The code base of the RPKI core and the publication server are of high quality. That's always nice to hear. But we have some areas for improvement there as well.
.
So, what are we going to do next year? We will start with crystal‑box penetration testing, and we will hire an external company to do a red team test, because the only way to know if you can withstand an attack is to actually have an attack ‑‑ and then test it, fix it, test it again.
.
So, that's something that we are going to do. We are quite curious to see what comes out of that.
.
We also need to talk to some open source maintainers, because one of the things in the report was that some of the code ‑‑ the open source code that we use ‑‑ can be improved in some areas, so we plan to contribute to that code as well.
.
And then we plan to do these assessments also on a more regular basis, to make it more part of the flow.
.
So that was really a useful exercise.
.
Then a completely different beast is our CPS, or Certification Practice Statement. Not many people know this, but when you run a certificate authority, it is good practice to have a public Certification Practice Statement. It is quite a lengthy document ‑‑ I think it's over 40 pages ‑‑ and we wrote it in 2012 when we started, and we have never updated it since, because it's quite a big document.
.
So it was time to have a complete rewrite of RIPE 549, which is our current CPS.
.
Now, we didn't have to start completely from scratch because, again, there is an RFC for that: RFC 7382, the RPKI CA CPS template, which is a good help as a starting point. We changed a lot of bits and pieces ‑‑ a lot of new RFCs have been published since RIPE 549, so we had to include them as well ‑‑ and we finished the rewrite last week; this has been a five‑month project. So the CPS is now with our legal team for a final review. Legal has been involved in the review since day one, but there are some clauses at the end of RFC 7382 that we have to look at a bit further.
.
After the legal review, we will pass it on to comms, the communications team, for a final spell check and grammar check, and then we will publish it before the end of the year under a new RIPE document number.
.
Now, this involved work from a lot of people: like I said, the legal team, but also the operations team, because the data centres, and access to those data centres, are referenced in the CPS, etc. And of course we could not have done this without the help of Tim Bruijnzeels, who used to work with us in the early days of RPKI and has now moved to NLnet Labs; we asked him for some historic insights here and there, so thank you, Tim, for that.

Then another part of the project is the redundancy and resiliency of our publication servers. In RPKI, you have two different repository access methods: rsync repositories and RRDP repositories, and we run both. We run them differently: the rsync repository is currently in‑house, and the RRDP publication server is already in the Amazon cloud.
.
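For context, a minimal sketch of how a relying party reaches these two repository types; the URLs are the commonly published RIPE NCC endpoints, but verify them against the RIPE NCC documentation before depending on this:

```python
# A hedged sketch of the two repository access methods from the relying-party
# side. The URLs are the commonly published RIPE NCC endpoints; verify first.
import subprocess
import urllib.request

# RRDP: fetch the notification file, which points at snapshot and delta files.
with urllib.request.urlopen("https://rrdp.ripe.net/notification.xml") as resp:
    print(resp.read(200))  # first bytes of the notification XML

# rsync: mirror the repository tree into a local directory.
subprocess.run(
    ["rsync", "-rt", "rsync://rpki.ripe.net/repository/", "./ripe-repo/"],
    check=True,
)
```
.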
We are currently working on deploying the rsync repository in the cloud as well, but with a better, more resilient architecture: spread over multiple regions, available of course over IPv4 and IPv6, etc. So that is the first step: rsync to the cloud.
.
Now, the thing is, if you completely rely on the cloud, you are still not fully resilient, because what if Amazon suffers a catastrophic failure? In that case, we have to be able to roll back, so we are building a backup plan so that we can fail over to our own infrastructure.
.
After rsync is completely moved to the cloud, we are going to take another look at RRDP, which is currently in the Amazon AWS cloud, and give it the same architecture as we will have for rsync: multiple regions, multiple availability zones, etc.
.
So, I see a question from Robert that I will take right away: will the repository be in one cloud only, rather than multi‑vendor cloud? Yes, we will have one cloud provider, but also a complete backup at home. So that is our plan going forward.
.
What else?
.
Yes, then, last but not least, the monitoring and alerting. You heard that we suffered some outages earlier in the year, and that taught us that there was a lot to improve in this area of monitoring and alerting. We defined and added more and better metrics. For example, one of the metrics now covers big transfers: the current policy is that we remove all the ROAs in case of a transfer. If you have a really large transfer with a lot of ROAs ‑‑ I'm not going to give you the exact number ‑‑ we now get an alert that this is happening, that we're about to delete an X amount of ROAs and that intervention is needed. So we tuned a lot of these metrics, which are also tied to the registry.
.
Now, with monitoring you are never fully done, and that means that every time you build something new, you have to make new metrics and have another look at the monitoring: what should you look at, what should you take into account? So this part of the project is never completely finalised, but I think we're on the right track with the big overhaul.
.
Finally, I think I still have two minutes, so I'm going to show you a little bit of the timelines that we have planned for this work.
.
So, the first one is the operational procedures. This is the SOC 2 Type II stuff. As you can see, we selected a third party, and now we're going to build the audit framework. Due to Covid, we had a little bit of delay earlier in the year working with parties, because we looked at multiple parties, not just BSI, but we plan to catch up on this so we are on the right timeline again.
.
And then the actual audit is for next year. The legal framework ‑‑ that means the CPS ‑‑ is well underway, almost done, and after that we're going to have another look at the terms and conditions. This is just a standard procedure that we check every year, so that shouldn't be too much work.
.
The RFC compliance assessment ‑‑ that was the Radically Open Security story I told you ‑‑ now only has the implementation of those findings left.
.
And then the technical infrastructure, that is a bit the cloud stuff. The monitoring overhaul is now completely done ‑‑ we keep tuning and fine‑tuning ‑‑ and then of course we have the automated infrastructure and provisioning, that is more the cloud work, which is all ongoing, and then next year core resiliency and quality assurance. I see there are three questions in the queue, which is great, because I'm happy to take questions. Thank you.
.
Questions:
.

PAUL HOOGSTEDER: Nathalie, I think you can just read out the questions and answer them directly.

NATHALIE TRENAMAN: Okay. So you want me to go to the Q&A.
.
Okay.
.
Did you consider adopting one of the standards currently used to audit the web PKI ‑‑ WebTrust or the ETSI one? If you did, why were they rejected?
.
We did actually; we looked at two ETSI ones ‑‑ I am really bad at remembering numbers ‑‑ and we did look at WebTrust, and they do encompass a lot of the elements that we were looking for. But, again, these were not completely tailored towards RPKI, so there is a little bit more to it than that. That is why we wanted to take it a little bit broader than just WebTrust or the ETSI ones, but we will definitely include some of the elements of WebTrust ‑‑ actually quite a few ‑‑ in the SOC 2 Type II. So I hope that answers your question.

Then, Herman: RIRs have been implementing RPKI according to their understanding of the following RFCs and related definitions but not all aspects of the system are bound by formal standards. It's not easy to work together on this.

Yes, and the RIRs are actually working together. We have a body amongst ourselves, on the engineering teams of the RIRs, called the ECG, the Engineering Coordination Group of the RIRs, and we have regular meetings to discuss what we're doing, how we're doing things and what we can learn from each other. Plus, quite a few people from those RIRs are also active in the SIDR Ops Working Group, which is the working group where the RPKI RFCs are being created. So, yes ‑‑
.
I would say your question is: is it not easy to work together on this?
.
Yes, we are working together on this.
.
I can't mark them ‑‑ because I don't have access to Slido, I can't mark them as read, so you have to do that yourself, or the chat monitor has to do that for me.

Rudiger Volk, hello; when will we hear about the RFC non‑compliance bit?
.
Yes, we will. I think you heard that yesterday as well: we want to be as transparent as possible, but the Radically Open Security report was really a security report, so we have to fix some things there ASAP, before the end of the year. As soon as that is done, we will close the findings of Radically Open Security, and we will also disclose those bits.
.
And then... let me see. Robert Scheck, another question: Well, I really dislike the idea of AWS only for RPKI production operations instead of using multiple cloud providers redundantly in parallel. If AWS partly fails, this means a larger outage until local recovery from backups happens.

I'm not really sure if moving between clouds is faster in case of a failover than moving back home. I don't know enough about the architecture to give you an answer to that, but if you drop me an e‑mail, I'm happy to find out for you.

And then the last one, from Leo: Will the RPKI‑specific version of the audit standard be freely available or available at low cost for use by others?
.
Yes, we plan to make this SOC 2 Type II, once we are done with it, available to the broader public, because this is member money that we're spending, and I am a great believer in transparency for the community, by the community, from us. That also means that there are a lot of other trust anchors out there that I would like to see incorporate assessments and audits like this. And there are also people that are learning about running their own CAs, what comes with that and the responsibilities that come with it, so it might be useful for them as well. So, yes, I'll be as transparent as I possibly can. Watch this space ‑‑ I will be back.
.
And I think that was the last question here in the question and answers. Did I miss anything, Paul?

PAUL HOOGSTEDER: I don't think you did. Thank you, Nathalie. And thanks for all the good questions.
.
This brings us to the end of this session. I hope to see you all back on the Routing Working Group mailing list. Any last words from Job or Ignas? No. Well, thank you all.
.
(Lunch break)