RIPE 81

Archives

Database Working Group session
.
29th October 2020
.
4 p.m. CET
.

WILLIAM SYLVESTER: Welcome everyone to the next exciting chapter of the Database Working Group. Before we get started, I just wanted to go over a few housekeeping things.
.
We have an audio queue to ask questions directly. Click on the mic icon to join the audio queue. You'll need to state your name and affiliation out loud. So please, everyone, let's keep it like we were in person.
.
For Q&A, type your question in the Q&A window and always include your name and affiliation. Questions will be read out to the presenter.
.
You can also chat with the group or with individual participants; it's all built into the software.

The stenography is available in Meetecho. Just as a reminder, the sessions are recorded and will be published in the RIPE 81 website archives.
.
So with that, we have an exciting agenda today.

First up, we're going to have an operational update on the database from Ed at the RIPE NCC. We're going to go over our current open proposals and our working items. We have a great presentation up ahead on the Cloud migration, and then, with a little time left over at the end, hopefully we can talk about anything else we need to cover.
.
Also, as another housekeeping item, we do still have one Chair slot open. In November, we're intending to open that up for nominations. If you are interested, please reach out to the Chairs. We'd be happy to talk you through it and explain what's involved. But with that, Ed, why don't you take over.

ED SHRYANE: Thanks a million. My name is Ed Shryane, I am a senior technical analyst at the RIPE NCC, and here is the operational update from the RIPE database team.
.
We are now a team of five, and thank you to my colleagues for their hard work over the months since May that contributed to this update.
.
So, first up, progress since RIPE 80.
.
We have had two WHOIS releases: 1.97.2 at the beginning of July, in which, firstly, we improved the inetnum status validation to align with RIPE policy. This was in cooperation with the registry services team.
.
And secondly, we removed some obsolete inetnum statuses that are no longer used in the database. We then released 1.98 at the beginning of October. We implemented NWI-11 to automatically convert internationalised domain names to Punycode, and we had a clean-up of data for consistency.
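As an illustration of the conversion NWI-11 describes (a sketch using Python's standard library IDNA codec, not the RIPE NCC's actual implementation), internationalised labels map to an ASCII-compatible Punycode form like this:

```python
# Illustrative sketch only: Python's stdlib IDNA codec performs the same
# kind of conversion NWI-11 describes, mapping internationalised domain
# names to their ASCII-compatible ("xn--") Punycode form.

def to_punycode(domain: str) -> str:
    # Each label is encoded separately; plain ASCII labels pass through.
    return domain.encode("idna").decode("ascii")

print(to_punycode("bücher.example"))  # xn--bcher-kva.example
print(to_punycode("example.com"))     # already ASCII, unchanged
```

A query for either form can then be matched against the stored ASCII representation.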
.
Secondly, we implemented numerous improvements to the RDAP service. This is the standardised cross-RIR query protocol. For example, we re-enabled the entity search with a limit of 100 matches, to strike a balance between a useful search and not overloading the query server, and we are now returning a remark in the RDAP response when validation has failed.
.
As usual, release notes are on our website.

There were two WHOIS outages since May. There was a partial network outage on the 22nd/23rd August, caused by a network problem in which a small fraction of traffic was discarded by an edge router; a planned upgrade a week later should prevent this from happening again. As a second lesson learned, we also improved our external monitoring and alerting, so the 24/7 team are notified sooner of external issues connecting to WHOIS.
.
Secondly, the web application was unavailable for 90 minutes on the 1st October due to a manual ad hoc deployment of WHOIS. Another lesson learned was: always use the automated deployment process, which does the right thing in this situation.
.
Some updates on the recently-introduced WHOIS features. First, two policy implementations. Last year we implemented 2017-02, regular abuse-c validation, so there are now more abuse-c addresses, just about 90,000 of them. And there are far fewer invalid addresses: only about 2,000, versus 6,000 this time last year.
.
Secondly, the non-auth route and route6 clean-up. We are now deleting about 100 of these objects a month, and we expect that to continue.
.
Thirdly, NWI‑8, synchronising LIR portal users to the default maintainer in the RIPE database, we have nearly 500 LIR organisations now synchronising users to the RIPE database, and that's up from 50 this time last year.
.
Finally, NWI-9: we now have six more NRTM clients compared to March.
.
It's nice to see some uptake on the Open NRTM service.

Some progress since the last RIPE meeting on locked person objects. We have now implemented a clean‑up of locked person objects, which were referenced from IPv4 assignments, and this constituted the vast majority, around 90% of all of the locked persons.
.
We have now unlocked 600,000 person objects; that is to say, we have updated the maintainer to the assignment or the LIR maintainer. But since then, we have discovered that only about 1,500 of those persons were deleted and about 500 of them have been updated. That seems surprisingly few, given there are 600,000 person objects involved and these objects have not been updated since at least 2010.
.
So the remaining work on this project is, first, to notify the LIR organisations to which the persons have been reassigned, and ask them to validate the contact details for their resources. And secondly, to clean up the remaining locked person objects. There are now around 13,000 of these locked person objects remaining, half of them referenced from inetnums that are not allocations or assignments, and they are the more difficult references that we'll have to go through on a case-by-case basis.
.
Also, about 1,000 of these locked persons are now unreferenced, so they are eligible to be cleaned up automatically after 90 days.
.
UTF‑8 in the RIPE database. Denis, in August, requested the RIPE NCC to perform a full investigation of the feasibility of UTF‑8 in the RIPE database, and there is an investigation in progress with a Labs article to follow. But for now I wanted to give a quick update on what I have discovered so far.

Firstly, within the RIPE NCC, we have internal procedures and internal applications which convert to Latin-1 before the data arrives in the RIPE database, and those would need to be overhauled if UTF-8 were to be added.
.
Secondly, we need to summarise the community discussion on where we should allow UTF-8 in the database and why. Another aspect is to look into how we prepare clients for UTF-8, as we currently only return Latin-1 to clients.
.
And finally, supporting UTF-8 is not a technical issue: the WHOIS application and database can already handle UTF-8. One aspect in particular is the full text search engine that we have for word-matching; it is well suited to fuzzy word-matching, so it can match similar words such as accented and unaccented variants, and it's a better engine to use for matching than querying the database directly.
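To illustrate why clients and internal conversion procedures matter here, a minimal sketch (illustrative only, not the NCC's code) of what happens when UTF-8 text is forced into Latin-1:

```python
# Illustrative only: characters outside Latin-1 are lost when converting,
# which is why procedures that transliterate to Latin-1, and clients that
# only expect Latin-1, need attention before UTF-8 can be stored end to end.

def to_latin1(text: str) -> str:
    # Characters with no Latin-1 equivalent are replaced with '?'.
    return text.encode("latin-1", errors="replace").decode("latin-1")

print(to_latin1("Café Zürich"))  # fits in Latin-1, survives intact
print(to_latin1("Łódź"))         # Ł and ź are not in Latin-1 -> ?ód?
```

The replacement behaviour here is a deliberate choice; a strict conversion would instead raise an error on the first unmappable character.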
.
So, upcoming changes:
.
Numbered work items. Denis will go through those in a moment and, for the Cloud migration, my colleague Sander is presenting on that.

For WHOIS, the Database Working Group has requested us to implement some additional validation when creating a maintainer, to require an MNT suffix. The majority of these maintainers are created by two RIPE NCC processes: either the new membership application through the LIR portal, or the maintainer role page in the web application. And we will be adding some more syntax checks on the front end to make that easier for users once this goes into production.
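A hypothetical front-end check for such a suffix rule might look like the sketch below; the exact validation rules the RIPE NCC will deploy are an assumption here (this assumes an RPSL-style object-name character set plus a mandatory "-MNT" suffix):

```python
import re

# Hypothetical sketch of an MNT-suffix check; the actual validation the
# RIPE NCC deploys may differ. Assumes names start with a letter, use
# letters/digits/underscore/hyphen, and must end in "-MNT".
MNTNER_RE = re.compile(r"^[A-Z][A-Z0-9_-]*-MNT$")

def valid_mntner_name(name: str) -> bool:
    # Maintainer names are case-insensitive, so normalise before matching.
    return MNTNER_RE.fullmatch(name.upper()) is not None

print(valid_mntner_name("EXAMPLE-MNT"))  # True
print(valid_mntner_name("EXAMPLE"))      # False: missing the -MNT suffix
```

Doing this check in the browser gives users immediate feedback, while the server-side validation remains authoritative.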

Secondly, client certificate authentication: I mentioned that at the last RIPE meeting. We are waiting for the security review before we put that into production.

And thirdly, the domain object clean-up that was also mentioned at the last RIPE meeting. It involves around 1,800 domain objects which are missing an nserver attribute, so they are not syntactically correct, and we will be following up with the maintainers of those objects to have them updated.
.
And on the website, we are prioritising changes based upon feedback from the RIPE NCC survey. Firstly, we are improving the user interface with a new website template early next year. You can already see this in the RIPE Atlas and RIPE Stat websites, and it will be coming to the RIPE database website soon as well. In particular, we expect to have better navigation around the site and better mobile support.
.
And finally, we are planning to improve the search page. The existing search page is not straightforward to use, in particular all the options that are available, so we're looking at ways to improve that. We're working on the functional design of that now, and we'll be ready to implement it once the improved user interface is ready.
.
And lastly, I'd like to mention RIPE NCC Certified Professionals. It's a way to certify your skills by taking an online exam. It was launched in April, when the RIPE Database associate exam was made available, and three vouchers were sent to each LIR. These vouchers expire at the end of this year, so if you weren't aware and you'd like to take the exam, please make use of your vouchers.
.
That's my presentation. Thanks very much. Any questions for me? Okay, I don't see any questions in the Q&A panel.

WILLIAM SYLVESTER: Thank you, Ed. Hang on one second. Dmitry is asking for audio for a question, so I'll grant him audio.

DMITRY SCHERBAKOV: It would be correct asking about all site or only database working? You can hear me?

WILLIAM SYLVESTER: Yes.

DMITRY SCHERBAKOV: Would it be correct asking about all RIPE site or only database working? Now, do you not understand me?

WILLIAM SYLVESTER: Can you state your name and affiliation and ‑‑

DMITRY SCHERBAKOV: My affiliation, Dmitry...

WILLIAM SYLVESTER: And were you asking about the website?

DMITRY SCHERBAKOV: Yes. It's not correct now?

WILLIAM SYLVESTER: What's your question?

DMITRY SCHERBAKOV: My question is about the matters of General Meeting, maybe it's need to do something working for easy ‑‑ working for the future discussions. Not with what was already done.

WILLIAM SYLVESTER: It's probably not appropriate right now.

DMITRY SCHERBAKOV: Okay.

WILLIAM SYLVESTER: All right. Thank you. Any other questions for Ed? Denis is going to talk about our open proposals in our working items.

DENIS WALKER: Okay. I am one of the co-chairs of the Database Working Group. I'll just bring up my screen share first, I think. So this is a review of the open ‑‑ audio very bad ‑‑
.
Managing abuse contacts in the database: we had a discussion recently on the mailing list, and a few people made the comment that if we need tools to manage the contacts, then maybe the whole system is too complicated. And they have a valid point there.
.
So we asked the RIPE NCC to look at the design again and see if anything can be made easier in the way it's actually managed and used by resource holders.
.
Item 2, displaying the history of objects in the RIPE database. There is an arbitrary limit on the history of operational objects available: if you delete an object and recreate it, when you look at the history you only see it back to that deletion point; you won't see any of the earlier versions of the object. We didn't have enough comments on the mailing list to have a clear view on how to move forward on this.
.
Also, the Database Requirements Task Force recently published a draft and they also had some questions on the history of data in the database. So we need your thoughts about this. What do you want from it? Is it useful? What do you use the history for? Which objects do you want to see? How much do you want to see? So if we can have some feedback from you on the mailing list, that would be very helpful.
.
3. AFRINIC homing. This was basically about moving routing database objects from the RIPE database to the AFRINIC database for any resources that are from the AFRINIC region. Historically, when AFRINIC was first set up, it didn't have a routing registry, so we moved all the address space objects, but all the routing database objects were left in the RIPE database. Now that AFRINIC does have a routing registry, we want to try and move them there. I think from the RIPE community's perspective, everything we need to do has been done. The objects that are still remaining in the RIPE database are all under the source RIPE-NONAUTH, so there is a clear separation between those objects and the authoritative data in the RIPE database. So... (inaudible)
.
NWI-4. Multiple status attributes. This is about when you make an assignment of a whole allocation. Because the address range or prefix of an inetnum object is the primary key in the RIPE database, you can't have two objects with the same range or prefix. So the only way you can do this at the moment is to split the allocation into two objects and create two assignments.
.
Somebody asked if we could allow multiple status attributes, so within the inetnum object, for example, we could have status allocated and status assigned, to show that the allocation has been assigned in its entirety.
.
Again, we didn't have enough comments on the mailing list to have a clear view how to move forward with this, so we would like some more feedback from you.
.
NWI‑6: Applicable data model. This was a bit complicated, it was about objects in the database that are not syntactically correct and having some reference back to the version of syntax that was applicable at that time. There were no comments at all made on it and the author agreed we should just cancel it. So, NWI‑6 will be cancelled.

NWI-8 is the LIRs' SSO authentication groups. As Ed said, stage 1 has now been finished, so all of an LIR's non-billing users are contained in a default authentication group, and there is a new authentication method that allows you to reference that group in your maintainer object.
.
This is now deployed and people are using it. The question is: Do you want stage 2? This was with having user defined SSO groups. So instead of having all your non‑billing users authorising changes to the RIPE database, you could create a sub‑group of those users and reference that in the maintainer.
.
Would this be something useful? Do you find it useful? Do you think it's essential, or should we just forget it? So if you can give us some feedback on the mailing list on that one, we'll have an idea of whether to move forward with it or not.

NWI‑9. The inband notification mechanism ‑‑

WILLIAM SYLVESTER: Go ahead. All right.

DENIS WALKER: This was about having updates to objects pushed out to you, or being able to pull out those updates, particularly, for example, routing database objects.
.
We opened up the NRTM service to everyone, and again, as Ed said in his update, a lot more people are now registered for NRTM and using it on a regular basis.
.
There was also talk of having a next generation of the NRTM protocol. No work has actually been done on that as yet. We suggest we close NWI-9, and the RIPE NCC, or any members who want to get involved, can work on the next generation of the protocol as time permits. If anyone is interested in that, perhaps you can contact Ed and talk to him about it.
.
NWI-10. This is the long-running saga on the definition of the country attribute, or now attributes. The RIPE NCC has just published a RIPE Labs article on this; that's the URL to find it. Leo Vegoda and I have already had a little conversation about one aspect of it, but I think it's probably better, if you have any comments on this, to make them on the Database Working Group mailing list, which probably has a wider audience than the comments at the end of the RIPE Labs article. But it is going to be implemented quite soon.
.
NWI-11 is internationalised domain names. As Ed said, Punycode support has just been deployed, so this is marked as finished.
.
If you want any further information on the NWIs, that's the URL where you will find them. You can see the problem statements, the suggested solutions and any assessments done by the RIPE NCC on what work is involved.
.
So, any questions? Was my audio working throughout that?

WILLIAM SYLVESTER: Your audio was a little choppy at times so we had to turn your video off.

DENIS WALKER: Right. That's the problem of using a little Chrome book.

WILLIAM SYLVESTER: Does anyone have any questions for Denis or anyone have any questions or comments on the NWIs? Any discussion? All right. Well, moving along.
.
All right. So up next we have Cloud migration, with Sander from RIPE NCC. So, with that, Sander, jump right in.

SANDER BUSKENS: Thank you very much. I hope my audio is okay, it's not cracking up.

WILLIAM SYLVESTER: You are good so far.

SANDER BUSKENS: Perfect. Okay, right, so that's my screen, I hope that's perfectly visible to everyone.
.
So, welcome, everyone, to the update on the Cloud migration of the WHOIS service. My name is Sander Buskens, I work at the RIPE NCC in the database team, so I am working on all WHOIS-related changes.
.
So, I am presenting on the proof of concept for the WHOIS release candidate environment, which is part of the company Cloud strategy that we mentioned at the last RIPE meeting. As a company, we are currently examining the feasibility of moving some of our services to the Cloud. At the last RIPE meeting, we mentioned that we'd be looking at the WHOIS release candidate environment as one of those services, and at WHOIS in general. The aim of the Cloud strategy is to improve the availability and the resiliency of our services.
.
The release candidate environment, as you may know, is the publicly available environment in which we deploy release candidate versions of WHOIS. Whenever we develop new features, we deploy to this environment first so the community can test out any changes, to make sure everything is working as expected and there are no adverse effects. For the Cloud proof of concept, we decided to move this release candidate environment into the Cloud.
.
We have a current architecture in place for WHOIS, so of course we needed to come up with an architecture that's suitable for the Cloud; moving this environment and getting a feasible environment up and running in the Cloud enabled us to establish that architecture. Then, of course, the aim was also to demonstrate the feasibility of moving WHOIS to the Cloud and see that it all works.
.
Last, but certainly not least, to make sure that we size the environment properly to be able to handle the regular production loads that we normally see for WHOIS.
.
Looking at it, some of the advantages that we see to moving to the Cloud are particularly in the area of flexibility. Resizing provisioned services can be done fairly quickly, so it allows us to scale up and down more easily as and when required. For example, during the proof of concept for the release candidate environment, we ran quite a lot of tests on it, also checking which database sizes we needed, etc., and all of this went fairly smoothly.
.
Also, cost optimisation: when we run in the Cloud, scaling up and down allows us to run exactly what we need. If we notice that we are oversized and don't actually need certain things, we can scale down more easily, and whatever we don't need, we don't pay for, so it helps in that particular area.
.
Ultimately, it allows us to focus on feature developments rather than infrastructure environment maintenance.
.
Furthermore, it gives us some operational improvements, because we intend to use managed services as much as possible for common infrastructural components, things like load balancers and so on.
.
Also, in the area of disaster recovery, we can host the RIPE database in different physical locations, so that improves things a lot.
.
And as we have already mentioned last time, we do aim to have an on‑premise instance of WHOIS running, in the event of a Cloud outage, so we always have like a backup available.
.
Furthermore, running in the Cloud should allow for us to improve the availability of the RIPE database.
.
Now, we went with the Amazon Cloud for a number of reasons, mainly technical but also somewhat organisational. AWS is the biggest Cloud provider. It provides a large number of services, a lot of which we would probably need, like databases, load balancers and containers. As a company, we support AWS: we have some in-house production experience with AWS via the RPKI team, and a lot of engineers already have prior AWS experience, so we expected to make faster progress by going with the Amazon Cloud.
.
And we have set up a Cloud team, a Cloud circle, where we have representatives from various disciplines within the RIPE NCC who are all working on Cloud migration initiatives. The advantage of having this Cloud circle is that we can share knowledge on technical challenges that we bump into, and on Cloud best practices, like how to set certain things up, so it's been a great help.
.
Also, we have an AWS implementation partner helping us with the Amazon Cloud, and the implementation partner has given us access to AWS solution architects, with whom we had some validation sessions on the architecture. We came up with a draft architecture and then had a couple of sessions with the solution architects to discuss it, along with Cloud principles and the best-suited AWS services, to come up with an appropriate architecture for running WHOIS in the Cloud.
.
Furthermore, we also had some Cloud training, and we probably will do more in the future.
.
Also, we spoke to the legal department about the legal considerations. For any service that we would be moving to the Cloud, we always do a data classification, in which we look at the data that resides within the service and assess the suitability of moving it to the Cloud. For the RIPE database, as we all know, there is a lot of personal data, so we need to take extra care when moving this data to prevent any leaks.
.
The legal department also reviewed the Amazon legal framework for us within the scope of the release candidate environment. Also, for the release candidate environment, we have a dummified version of the production data, for which we stripped out all the personal data and replaced it with dummy values.

At the last RIPE meeting, it was mentioned that the description and auth attributes can still contain potential personal data. As part of this review, we actually looked at these attributes and improved the dummification process to also strip out those values.
.
And last but certainly not least, the WHOIS release candidate service and data will reside within the European region.
.
Now, in order to protect the personal data that resides in the database, we encrypted all of this data at rest, and the connections between the different applications are also encrypted in transit, to protect the data as much as possible.
.
Before we move things to the Cloud, we'll be doing some extensive pen testing of the services. We will also be doing secure code reviews, looking at network security, and taking advantage of some of the services provided by AWS, particularly in the area of secrets and credentials management: SSL certificates, automatic key rollover and these types of things, which are all a benefit of the platform.
.
AWS also provides an extensive audit trail of infrastructural changes, so we can see why and when we changed something, which helps.
.
And we looked at the AWS shared security responsibility model, which basically gives somewhat of a division of all the security measures that we of course need to take into consideration when developing and deploying applications. It does provide us with some advantages: because we use managed container services, for example, operating-system-level patching and so on is done by AWS, and it's not something we need to look into. Of course, we do still have to deal with the containers themselves and potential vulnerabilities, but AWS also provides some nice container vulnerability scanning to notify us of these types of things.
.
Now, if you look at the application itself, the application characteristics of the WHOIS landscape: the WHOIS application and the supporting applications that we also maintain are all Java-based. They are very read-intensive applications; we average about 1,000 queries per second and roughly one update per second. And the total database size is about 100 GB, so it's not too big in terms of databases.
.
So, in conclusion, the application is mostly very read intensive with limited writes to the database. So, that helps us when we need to scale up for example.
.
Now, looking at the current architecture, I hope this picture is clear to everyone. Basically, what we have at the moment is that the WHOIS application itself is a Java process which runs on a physical machine. Actually, for the production environment it's four physical machines; they are quite big, with lots of CPU and lots of memory, and they run the WHOIS service. There is also DB Web UI, which is basically the front end for the application; all of these reside on one machine, of which we then have four instances. And also WHOIS internal, which is for some internal supporting processes, things like, for example, the abuse validation that we run.
.
Now, all the data is stored in the database, and we have connections to other supporting services. Then, in front of these applications, we have some reverse proxies and a load balancer. One thing about this current architecture is that if, say, we wanted to scale out, we would have to add an additional machine, which is a considerable effort. So that's where we have now come up with this Cloud architecture. The way we have set it up at the moment, it all runs within an AWS region, a European region, in a virtual private Cloud in which we have multiple availability zones, and inside these availability zones we have containerised the WHOIS application. In this picture I only have the actual WHOIS application, not the web front ends, which we also maintain and which were also part of this proof of concept; the architecture for those looks very similar to what we see here.
.
So, what we do is: the application is containerised into a Docker container that we then deploy onto the cluster, using Amazon's container management service. Basically, we deploy these instances onto the cluster, which is distributed across the different availability zones, in different physical data centres. In the case of an outage of any particular data centre, we always have the other data centre, which helps, of course, with the resilience of the application.
.
Furthermore, we have some EFS shares. We also have network file shares in our current architecture, in which we store things like the application logs and the logging that we do for all of the incoming updates.
.
Also, for the full text search functionality that we have in the application, the indexes reside on these shares. That's still the case in the current Cloud architecture, but we are actually looking into spinning up a managed Elasticsearch cluster for the full text search functionality.
.
Of course, we also have an RDS database in which we store the application data, and a VPN connection back to the on-premise supporting services that we need.
.
And then, when it comes to deploying a new version of the application into this architecture: it's not included in the picture, but on premise we run a GitLab CI/CD pipeline in which we build the application and run all of the testing. For WHOIS we have thousands of integration tests that we run in order to prevent any regression issues from popping up. Once all of this is successful, we also have some QA running, some code metrics, etc. When all of this has passed, we build the application containers and provision them to the Cloud, to the Elastic Container Registry, which resides in our VPC, and then we notify the Fargate cluster of a new version of the application. The Fargate cluster will then basically do a rolling deployment: it will pick up the latest and greatest container image and bring up a couple of extra instances, and when the load balancers detect that the new instances are up and running and everyone is happy, the old instances will be decommissioned. That's how we do deployments within the Cloud architecture.
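The rolling-deployment pattern described above can be sketched very roughly as follows (an illustrative simulation only, not the actual ECS/Fargate mechanics; the function and instance names are hypothetical):

```python
# Illustrative simulation of a rolling deployment: new instances come up,
# a health check confirms they are serving, then old ones are retired.
# The health check here is a stub that always passes.

def rolling_deploy(running: list[str], new_version: str, count: int) -> list[str]:
    # Bring up 'count' instances of the new version alongside the old ones.
    new_instances = [f"{new_version}-{i}" for i in range(count)]
    # Health check stub: in reality the load balancer probes each instance.
    healthy = all(True for _ in new_instances)
    if not healthy:
        return running       # keep the old version if the rollout fails
    return new_instances     # decommission the old instances

fleet = ["v1.97-0", "v1.97-1"]
fleet = rolling_deploy(fleet, "v1.98", 2)
print(fleet)  # ['v1.98-0', 'v1.98-1']
```

The key property, as described in the talk, is that old and new instances overlap briefly, so queries keep being served throughout the rollout.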
.
So, for the migration process, what we did is build up the AWS environment with the infrastructure provisioned using Terraform, which is basically infrastructure as code, and that helped us to quickly provision the infrastructure required to run the application.
.
We containerised the application and set up the continuous deployment pipeline, which involved some work to get everything up and running. And last but not least, we did quite a lot of load testing. We took basically one hour of WHOIS production load and ran this against the AWS environment to make sure that we can actually support the normal regular production load. For this, we did have to experiment a little with different instance sizes of the application and of the RDS database, tweaking some settings and running more or fewer application instances. We can now support the full production load.
.
Now, we also looked at cost, where it's important to say that cost is not the primary objective of this exercise; the primary aim was availability and resiliency. But we did keep an eye on the cost and identified that the main cost is going to be in RDS and Fargate and, to a lesser extent, network storage. We're also looking at things like, for example, using reserved instances once we know exactly what the environment is going to look like in terms of the instances required. And we're experimenting a little with running non-production environments only when we actually need them, because whatever we don't run, we do not pay for.

All of this is part of an internal cost review that we're doing with the Cloud circle, and eventually this will lead to some figures that we can use for budgeting.

The next step is, first of all, to ask all of you to try out the WHOIS release candidate environment. It's still available at this link; it's no longer running on premise but in the Cloud. There is also some work in progress: IPv6 support, for example, and of course the failover environment that we are going to be running on premise at the RIPE NCC; we are still setting that up. And for the personal data query limiting that we do, we have personal object accounting, and we still have some issues with the client IPs that we use for this object accounting.

Furthermore, we'll be looking at availability and, as I mentioned before, moving the full text search into a managed Elasticsearch cluster.

When all of that is done, we will be looking at a production rollout, which we expect to do in the first half of 2021. As always, we'll be preparing a plan for this production rollout, which will further elaborate on expected downtimes in terms of query downtime and update downtime. For query downtime, we actually expect to have no downtime whatsoever, but for updates there will probably be a small window of downtime.
.
So, that's the end of my presentation and my updates. We'd be happy to hear your questions and other feedback.

WILLIAM SYLVESTER: We have several questions online. Stavros, we'll come back to your question because it dealt with Denis's presentation. [Dialo], I believe that Ed reached out to you directly. Wesel from Prefix Broker asked: If a move to the Cloud has cost savings, but a complete local environment is still maintained as fallback in case of a Cloud outage, what are the exact cost savings?

SANDER BUSKENS: The primary objective of this exercise is not Cloud cost savings; it's mostly about availability and resilience. We don't expect costs to go down initially.

WILLIAM SYLVESTER: All right. Dmitry from A&T: What will you do if the services are blocked, like it happened with some Google services in Russia during the Telegram wars? Amazon is a US‑based company; what would you do if the US government imposes sanctions on some other country? If you use Amazon, are there any guarantees of data protection and correct operation in the event of US intelligence involvement?

SANDER BUSKENS: That is a lot of questions at one time. We do always intend to have the fallback environment on premise.

WILLIAM SYLVESTER: Okay.

MARCO HOGEWONING: Can you tell us what your affiliation is? I see that you plan to use many AWS services, but do you still plan to support serving traffic from an NCC local instance? How much will it cost to develop the duplicated parts of the system?

SANDER BUSKENS: I can't answer that question at this point in time. We'd need to look into that.

WILLIAM SYLVESTER: Okay. Blake Willis from iBrowse: Much of AWS's EU infrastructure is in the United Kingdom. Is there any impact of Brexit on personal data moving outside of the European Union? I think RIPE looked into the impacts of Brexit with regard to where the data would be hosted in the Cloud infrastructure. Hang on, Athina wants to answer this one.

ATHINA FRAGKOULI: Hello, can you hear me? We are following the developments around Brexit and any changes this will have in the legal framework around the data protection regulation. Once this legal framework is there, we will take all appropriate measures to comply with it. Thank you.

WILLIAM SYLVESTER: Thank you. All right. Harry Cross: Does the NCC foresee any staff reductions or redundancies after the move is completed? Kaveh wants to answer this question.

KAVEH RANJBAR: So, I just wanted to clarify: definitely not. The idea is that we want to use the resources that we have and, as you have seen, we have quite a backlog, not only for WHOIS but for a lot of our services. We want to make sure that we focus on what we are best at, delivering as much value as we can for our members, rather than following up commitments with data centres and spending a lot of resources on changing disks, maintaining hardware and things like that. Of course, within the limits that are clarified for the service, we will still keep some things internal, but we will try, especially for expansion or additional availability, to use the Cloud. By no means does it mean that we won't retain the staff; it's just focusing our resources more on our main purpose. Thank you.

WILLIAM SYLVESTER: Great. Marco is with Seeweb. Robert Scheck asks: Why was AWS chosen, and why is only a single Cloud provider situation being created? It feels like using as many SaaS features of AWS as possible rather than being Cloud and vendor neutral, not to speak of using multiple Cloud providers in parallel.

SANDER BUSKENS: We chose AWS mainly for technical and organisational reasons. But I guess we're open to this.

WILLIAM SYLVESTER: Dmitry was asking again: Who decided to use Amazon, and maybe it first must be voted on by the GM? I think that's: should it be voted on by the GM?

SANDER BUSKENS: I can't really say.

WILLIAM SYLVESTER: Kaveh is going to take this one.

KAVEH RANJBAR: Thank you very much again. I think it's good to clarify that our aim is not to definitely always use a single provider; it always depends on the service. Now that I see this question has come up multiple times in different situations, and I had a chat with my colleague as well, we are going to publish an article soon about each service, because this doesn't mean that we move the whole RIPE NCC to the Cloud with one strategy for everything. We have different services with different availability requirements, and depending on the service, say RIPE Atlas, it would be very different from RIPE WHOIS or the RPKI repository. So we will look into the different backup scenarios that we would have; multiple clouds are not out of the question, but it really depends on what we gain in availability and the other requirements that we have. We have a clear internal process for that: there is a team in our organisation called the Cloud Centre of Excellence, which has a process for any service that wants to be moved, or is looking into moving, to the Cloud. We would check availability, we would check dependency requirements, and we would evaluate vendor lock‑in, which we really try to avoid, but if it happens we try to minimise it so that we can easily migrate. And of course other impacts, including financial and usage predictions and all of that. So that's all in place.

Regarding the selection: at the moment, as you saw, this is basically a release candidate, and we are still discovering. As for the selection process, the RIPE NCC has a procurement process which is very clear, with financial limits on the commitments that management can make versus what needs the board. I'm not super familiar, and Athina can correct me if I'm wrong, but I actually don't think, from our governance side, that any expenditure needs the GM; the board should be able to do that. The board has been fully informed from basically the inception of this; indeed, the board has been updated, is getting regular updates, and is giving us direction on how to do that. So the board is very much informed and, as said, it's not that the RIPE NCC is going to move to Amazon for this service right now. Basically, the suggestion coming from the NCC is to use Amazon as the primary service provider for distributing WHOIS, but that will of course go through our procurement process, which has competitive checks and everything involved. Thank you.

WILLIAM SYLVESTER: Thank you. Right. Any last‑minute questions? I think we're a little over time on this. All right. Seeing none. Let's move on.
.
We're moving on to any other business. Is there anything else we haven't covered at this time that anyone would like to discuss? All right. Well, with that, I think we can try to finish up a little early. Thank you everyone for attending, and I appreciate all of your input. We'll have some new work items and discussions ongoing on the mailing list, so we encourage everyone to participate in those and, of course, to bring all of your new proposals to the mailing list.
.
With that, as we mentioned at the beginning, we are looking for another co‑chair. We have a few things coming up soon, so look for that on the mailing list as well; we hope to get it out in November.
.
But other than that, I think have a great rest of the day and hopefully we'll see everyone soon, whether it's virtual or in person, and be well.
.
Denis, did you have anything to add?

DENIS WALKER: I think you summed it up nicely there.

WILLIAM SYLVESTER: Great. We'll see everyone soon. Take care.

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.