RIPE 81
Plenary session
27th October 2020
1 p.m.


CHAIR: So hello everybody to the afternoon session. I am Wolfgang Tremmel and I am going to chair this session together with Alireza Vaziri. You have seen the housekeeping slide, and if you have any questions for the speaker, you can type them into the Q&A panel at any time; the speaker will see the questions after the talk finishes. I would also ask you to press the request-microphone button only after the speaker has finished, otherwise we would have to remove you from the microphone queue. It's like at a live RIPE meeting: when the speaker has finished, you go to the microphone queue, but, as I said, you can ask written questions at any time.

With that, I would like to hand over to Geoff Huston and he is going to talk about measuring RPKI. Geoff, the floor is yours.

GEOFF HUSTON: Thank you. I will ‑‑ do I really want to share my screen? Yes, I do. Let's do this. Let me wander into full screen mode and start this off.

Yes. Hi everyone, my name is Geoff Huston, speaking to you from this strange nighttime void that I find myself in.

We are going to talk about something which is a real mouthful. It follows on from Susan's talk this morning, but it is looking at the effectiveness of all those ROAs you have been creating: trying to understand what happens out there in the net with route origin validation filtering using invalid drops ‑ measuring RPKI.

So, let's dive in.

What are we trying to do with routing security? Good question. I have some prepared answers. Maybe we're really lousy at our job ‑ fat‑finger syndrome and all of that ‑ and we need to protect the routing system from all kinds of silly things, because what we actually find is that most of the leaks are because of a fat‑finger problem somewhere on a keyboard on some router somewhere. Maybe we are just trying to protect the routing system from the humans in the loop, and if we all walked away it would be so much better.
There is also this issue that not all of us are nice to each other, and, you know, there is this element of hostile attack. Wouldn't it be good if the routing system could identify the evil updates and, when it found an evil update, filter it out and get rid of it?

Not everyone pays attention to addresses and some folk just make them up. Others continue to leak 10/8 and all other sorts of bogus address prefixes. Maybe we should just stop that nonsense because, you know, that's silly.

The same with autonomous system numbers: on any day, there are at least ten different networks that camp out on AS1, because why not? And AS2, and AS3.

Maybe we want to try and go a bit further and actually prevent all forms of synthetic routes, whatever that means. The idea is that this protocol naturally propagates good routes, but routers can just make routes up, and maybe we want to pick out the made‑up ones as different from the normally propagated ones. Okay.

How about a really tough one? Unauthorised route withdrawal? Whatever that is.

And last but not least, protect the users from being directed to bad places.

Now, I actually think that that list is unachievable. Protocol‑wise and policy‑wise, that accuracy is beyond the capabilities of BGP and any viable BGP control mechanism. Footnote: BGP is not a deterministic protocol. When you actually stress it, it's a negotiation between sender and receiver, and when those policies differ, the outcome it negotiates can differ each time it tries. So there is no right answer in BGP. There is a range of answers that are approximations in trying to match import and export policies, and, quite frankly, that's about as good as you get.

Now, route origin validation is actually designed to prevent BGP speakers from learning and preferring routes to unauthorised prefixes ‑ nothing to do with policy.

So the intent of not preferring these unauthorised routes to prefixes is to prevent users from going to a bad place, to a prefix where the route is not authorised.

So, if we go back to that original list, some of those things are doable but not that important; others are not even possible. Really, what we're on about is trying to protect the poor user from going to the bad place.

So, the primary purpose is a user purpose. Our objective is not to count the number of ROAs and count the number of routes and count this and count that ‑ which is all good and fine ‑ but this is a user‑centric measurement, and, realistically, what we're trying to ask is: when it's a bad place you are going to, how many users go there anyway? And what we'd like to do is keep this going as a long‑term measurement across the entire Internet for as long as possible.

So, the approach is to measure the effectiveness of route origin validation in blocking the ability to direct users along these bad paths, and that suggests an approach. What we do is set up a bad path and set it as the only route, so that you are only going to believe it if you don't care about RoV ‑ you are going there anyway. Then we try to direct a really large number of users to reach a server that can only be seen behind this bad path. And, just to make sure we understand what's going on, we set up a control measurement ‑ a valid path that goes to much the same point ‑ and measure and compare.

So how do we do this?
We used Krill, and I must admit, a big shout‑out to the folk at NLnet Labs: Krill just works. Fantastic. We're on a Debian platform, if that matters ‑ I don't think it does ‑ but pull it out of the box, make, run, done! Brilliant.


We're actually using our own delegated RPKI repository, none of this hosted stuff. Our mistakes, our destiny. So, yes, this was an essential part of the experiment.

By the way, as soon as you run your own RPKI repository, every other player has to come and poll you. And so you can count them, and this is the number of unique IP addresses per day performing a fetch from our repository. Things started in around March with around 800 unique addresses, so, between v4 and v6, somewhere between, I don't know, 400 and 800 individuals polling us. The red is rsync ‑ don't use it; the blue is RRDP. Around about May, I think there was a software release that changed something, and everyone started using RRDP ‑ wonderful stuff. But what's this weekday/weekend profile? What the hell are you doing? I'm sorry, but if you are trying to run a security watchdog, the bad things don't take a holiday on weekends, I promise you. It's a 24‑hour‑a‑day, seven‑day‑a‑week kind of problem, but there are a few folk, 70 or 80 or so, that only visit us on weekdays and on weekends are asleep at the wheel.

I'm totally curious, I have no explanation, but I find it really odd.

Anyway, we set up our prefix and our autonomous system number, delegated in an RPKI repository, and what we decided to do, rather than having a permanently bad path and a permanently good one, was to use a control in time. So, sometimes it's good, sometimes it's bad. It's the same place, the same prefix and AS; only the covering ROA keeps changing. And in Krill that's really easy ‑ it's all in the crontab entries; you can flip: good, bad, good, bad. We currently have four locations for an Anycast prefix. More is better, so if you can help us with this, we'd love to hear from you. We're in Los Angeles, Frankfurt, Singapore and Brazil, and we're serving one‑by‑one pixel blots.
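A minimal sketch of such a cron‑driven ROA flip, assuming a delegated Krill CA named "ca" and that krillc's "roas update" subcommand behaves as in the current Krill documentation; the prefix and AS numbers here are placeholders, not the experiment's real values:

```python
#!/usr/bin/env python3
# flip.py - swap the ROA for the test prefix between a "good" origin
# (matching the announced route) and a "bad" one (making it RPKI-invalid).
# Hypothetical values; krillc syntax may differ between Krill releases.
import subprocess
import sys

PREFIX = "203.0.113.0/24"   # placeholder anycast test prefix
GOOD_AS = 64500             # placeholder: the AS actually announcing it
BAD_AS = 64511              # placeholder: a deliberately wrong origin AS

def set_roa(add_as, remove_as):
    # Publish the new ROA and revoke the old one in a single update.
    subprocess.run(
        ["krillc", "roas", "update", "--ca", "ca",
         "--add", f"{PREFIX} => {add_as}",
         "--remove", f"{PREFIX} => {remove_as}"],
        check=True,
    )

if __name__ == "__main__":
    # Driven by crontab entries, e.g.:
    #   0 0 * * 1 /usr/local/bin/flip.py good
    #   0 0 * * 0 /usr/local/bin/flip.py bad
    if sys.argv[1] == "good":
        set_roa(GOOD_AS, BAD_AS)    # route becomes RPKI-valid
    else:
        set_roa(BAD_AS, GOOD_AS)    # route becomes RPKI-invalid
```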

We'd love to do it in v6 as well as v4, but when you are scrabbling around low‑cost virtual hosting services that do BGP and Anycast, you've got to take what you can get, and what you get is v4 only. Sad fact.

Last but not least, load up a unique URL and set it into a measurement script: everyone, go here! Now, what we do with the DNS component and the URL is use HTTPS and a unique DNS label. Why? There is no caching, there is no middleware; you have to come and visit us if you want to get this URL ‑ there is no choice.
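A sketch of that cache‑busting idea: each measurement is pointed at a URL under a unique, never‑repeated DNS label, so no resolver cache or HTTP middlebox can answer on the experiment's behalf. The domain is illustrative, and serving HTTPS this way assumes a wildcard certificate for the parent zone:

```python
# Generate a single-use measurement URL. Because the label has never been
# seen before, the DNS query and the HTTPS fetch must both reach the
# experiment's own servers - nothing in the middle can have it cached.
import secrets
import time

def unique_url(experiment: str, domain: str = "example.net") -> str:
    label = f"u{secrets.token_hex(8)}-{int(time.time())}-{experiment}"
    return f"https://{label}.{domain}/1x1.gif"

print(unique_url("rov"))
# e.g. https://u1f3a9c2b40d7e85-1603800000-rov.example.net/1x1.gif
```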

That eliminates a whole bunch of noise that would otherwise intrude into this measurement because of the various forms of caching out there.

And last but not least, feed this into an online ad, because, you know, ads...

That's part of the large ad‑based measurement system we have been using for quite some years now; the ad gets served, the script runs, and we just collect and analyse the data.

We also collect what the user's browser thought it did, so that zombies and stalkers ‑ of which there are a lot on the Internet ‑ can be filtered out of this data feed.

So, up, down, up, down, up, down. How frequently should we flip? How long does it take to learn that a previously good route is now a bad route, and how long for a bad route to become a good route? We make the change in our publication point: we just revoke the old certificates and publish the new ones. Shouldn't that affect routing? Well, eventually, because everyone else ‑ all those 1,800 pollers back there, remember them ‑ has to come along and see that things have changed. And if you read the copious RFCs ‑ and there are a lot of RFCs about RPKI; I know, because I wrote a few, and they weren't short, and there were a lot more besides ‑ there is actually no standard requery interval. And one of the few things I know is that programmers should never be creative ‑ we're lousy at it ‑ because, given the choice, we never coalesce to one value.

And, oddly enough, for this requery interval we actually see, across all these visitors, three major peaks. There are a number of folk that use software that bangs away like crazy ‑ I think it's Routinator, to be perfectly frank, but the Routinator folk will tell me ‑ that uses a two‑minute interval. There is another bunch of software out there that uses a 10‑minute interval: bang, bang, bang. And there are folk out there with one hour: bang, go to sleep; oh, bang, go to sleep.

And there is some stuff in between. What's this stuff in between? I'll come back to that, but those intervals show us one thing: within two hours, in a cumulative distribution, around about 75% of the clients have performed a requery. So, if you are trying to flip these states faster than once an hour, people are not going to see the flip. Once you go beyond an hour, it rises slowly, but within two hours ‑ which is where I plotted the end‑point ‑ we're up to 75% of these clients. Good enough. If you are slower than two hours, you have missed the bus.
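A sketch of how such a requery‑interval distribution can be derived from the repository's own access logs: group fetches by client, take the gaps between consecutive fetches, and ask what fraction fall within a given horizon. The record layout is invented for illustration:

```python
# Compute the share of inter-fetch gaps at or under a horizon (default:
# two hours), i.e. one point on the cumulative distribution of requery
# intervals described in the talk.
from collections import defaultdict

def requery_share(records, horizon=7200):
    """records: iterable of (client_ip, unix_timestamp) fetch events."""
    by_client = defaultdict(list)
    for ip, ts in records:
        by_client[ip].append(ts)
    gaps = []
    for times in by_client.values():
        times.sort()
        gaps.extend(b - a for a, b in zip(times, times[1:]))
    if not gaps:
        return 0.0
    return sum(1 for g in gaps if g <= horizon) / len(gaps)
```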

Why that funny noise in between?
Oddly enough, not every RPKI publication point is as fast as every other, and there is this really helpful graph from Wikimedia, who are actually publishing the time it takes to visit each of the sites in the distributed RPKI repository dataset. At one point it takes 27 minutes to visit one site, and the sweep is not in parallel; it's serial. So, at times, there is this extraordinary lag that gets imposed, because some people's RPKI repositories seem to be built out of honey trap software, God knows why. Some sites are slower than other sites, and that causes the entire sweep process to go sluggish. So even if you have got this two‑minute trigger, if it takes seven minutes, 10 minutes, 15 minutes to pass through the lot, that's as fast as it goes.
So here is our flipping: good to bad, good to bad; Sundays are bad, Mondays are good, Tuesdays half of each, Wednesdays are bad, etc. You can see the peers of RIS actually showing a change in connectivity, and if you watch it in BGPlay, you know, you are doing fine, and then all of a sudden withdrawals happen; doing fine, then all of a sudden more withdrawals happen. So there is this constant oscillation going on inside this entire process.

So now we look at the fetch rate across the week, and this is every second for a full seven‑day period. It didn't start on a Sunday; it started on a Monday, so this is a Tuesday.

Half good, half bad.

So, there is good: about 95% of the time, folk get it when it's good. When it changes to bad, we drop down below 90% most of the time. Good, bad, good, bad, good, bad over a seven‑day window.

But there are some shoulders. As soon as we flip the state, the routing system doesn't see it immediately, and when we flip back to good, if you look really closely, you'll see a build‑up. So let's look really closely.

When we flip from good to bad, folk still get it when it's bad for about 30 minutes, and only after 30 minutes is there a change in reachability, where we drop down to 85% or so.

So, because not everyone polls like a maniac ‑ which is probably a good thing ‑ and because it takes some time to feed this through the filter sets and into BGP, even though the convergence time for a BGP change across the entire Internet is about 70 seconds, the response time from RPKI is about 30 minutes.
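A sketch of how that 30‑minute figure can be read out of the fetch data: bucket fetch outcomes per minute relative to a known flip time and watch the reachability rate decay (good to bad) or recover (bad to good). The data layout is illustrative:

```python
# Reachability per minute around a ROA flip. fetches is an iterable of
# (unix_timestamp, succeeded) pairs from the 1x1-blot web logs.
def reachability_by_minute(fetches, flip_ts, window_minutes=60):
    buckets = {}
    for ts, ok in fetches:
        minute = int((ts - flip_ts) // 60)
        if -window_minutes <= minute <= window_minutes:
            good, total = buckets.get(minute, (0, 0))
            buckets[minute] = (good + bool(ok), total + 1)
    # Fraction of successful fetches in each minute bucket.
    return {m: g / t for m, (g, t) in sorted(buckets.items())}
```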

So if you think this is a really good DDoS defence, think again, because your systems would have melted before RPKI comes to the rescue. So don't drop your remotely‑triggered blackhole communities; they are what you need for that kind of defence. RPKI is something completely different. It's much slower to react.

Bad to good is quicker. But by quicker, I still mean five minutes, which in terms of BGP is still pretty slow. The reason why, of course, is that the system is dependent on the first transit to wake up, not the last one; so, normally, good happens faster than bad. But bad is slow.

So: 2, 10, 60 minutes. Two minutes, I think, is a bit like Goldilocks: it's just wrong, it's just too fast. It thrashes. Right now, there are only 1,800 queriers. What happens when all 60,000 ASes run their own software? At 2 minutes, that's a huge amount of traffic. At 60 minutes, it's geological. Frankly, I think the 10‑minute timer is a decent compromise. We're not trying to be a DDoS responder, we're not trying to be ultra fast, we're not trying to melt the system by being really responsive; but, at the same time, we're not trying to implement same‑fortnight service. We're actually trying to condense the time period just a little.
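The back‑of‑envelope arithmetic behind that judgement, using the talk's rough figure of 60,000 ASes each running a validator against a single repository:

```python
# Aggregate poll rate a lone repository would see at each requery interval.
ASES = 60_000
for label, seconds in (("2 min", 120), ("10 min", 600), ("60 min", 3600)):
    print(f"{label}: {ASES / seconds:.0f} fetches/sec")
# 2 min: 500 fetches/sec, 10 min: 100 fetches/sec, 60 min: ~17 fetches/sec
```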

Here is the basic answer:
Right now, around 70% of users can't get to it when it's bad, across the entire world. Now, it took v6 20 years to get to 17%. It's taken route origin validation filtering a little under a year. That's really good; that's really fast. But why?
Well, first let's look at where. This is the state of the world in July 2020, July of this year. Green is good; red is bad. Red says no filtering whatsoever. Green says: oh my God, I can't get there when it's bad, quite reliably. And in the middle, some ISPs in the country do it, some don't.

By October, Australia had changed; Africa ‑ Namibia ‑ got worse. But time is running out, so: why? Because some networks do it. There are those that do the filtering themselves, and it's pretty obvious: the rates are up at 96, 97%. But some sit behind transits that do it. They don't have to do anything, because their transits are filtering. Job done. The basic objective of not getting steered to the bad place simply happens because my transit dropped that route: I can't get there. Good enough!
That's an interesting point, really. By the way, we also see when folk turn it on. You'll notice Australia went from red to sort of yellow‑ish; that's because Telstra turned it on. The tool that I have there shows you not only countries but networks, and networks do turn it on: Telstra turned it on in the last week of July, and it's pretty obvious that they did. Locally in Europe, the Scandis do it; the Germans don't; the UK, no; Italy, no. You can see for yourself: the dark reds are certainly folk that have yet to do much here.

The ones that are not coloured red either do it themselves, or they can't get to Frankfurt or any of the other Anycast locations when the route is bad because their transits do it. Either way, they are covered to some extent by their transits.

Where do we go from here? There is a lot that we can keep on measuring. I'd actually like more Anycast servers ‑ it's really lightweight, you just serve a one‑by‑one GIF ‑ but it's got to work off the same prefix, so it's just adding more into this cloud. More Anycast servers and more transit diversity would increase the accuracy of this measurement.

It would also be interesting to find out who is actually doing the dropping and who is behind the dropping ‑ in other words, to identify the networks actively doing route origin validation filtering and the ones that are piggy‑backing for the ride. Traceroute might be able to do that; certainly, doing selective traceroute might help us here.
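A rough sketch of that selective‑traceroute idea: trace toward the test prefix in the good state, flip the ROA, wait for propagation, trace again, and see where the path now stops; the hops that disappear point at the network doing the dropping. This shells out to the system traceroute, the target address is a placeholder, and mapping hops back to ASes is left out:

```python
import subprocess

def hops(target: str) -> list[str]:
    # Parse hop addresses out of "traceroute -n" output, skipping timeouts.
    out = subprocess.run(["traceroute", "-n", target],
                         capture_output=True, text=True).stdout
    addrs = []
    for line in out.splitlines()[1:]:       # first line is the header
        fields = line.split()
        if len(fields) > 1 and fields[1] != "*":
            addrs.append(fields[1])
    return addrs

good_path = hops("203.0.113.1")   # while the ROA says the route is valid
# ... flip the ROA and wait ~30 minutes for RoV filtering to take effect ...
bad_path = hops("203.0.113.1")    # the path should now truncate earlier
print("hops lost in the bad state:",
      [h for h in good_path if h not in bad_path])
```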

The other way of doing it is to take it one step further and sift through the incredible noise that is BGP routing updates ‑ good luck ‑ and, don't forget, we're generating six of these events every week: up, down, up, down, up, down ‑ and work out from the updates where the root cause of the withdrawal is, because that's the point where you are getting RoV filtering.

So, in some ways, we will indeed, I think, work through those updates and do, from RIS data, some degree of root‑cause analysis of who is doing it.

I keep on hearing that everyone should do this, and I keep on wondering why, because, quite frankly, I don't think you need to. There is a real difference between a stub network and a transit network; does every AS need to operate RoV filtering? If the transits do it, what's the point? What's the minimal set where we can get decent coverage without pushing everyone into doing something that is, indeed, one more thing to go wrong when it does go wrong? So, on the marginal benefit of stubs doing it, I'm not there.

Transits and IXs doing it ‑ that's a really good change. But others, I'm not so sure.

There was an interesting incident with Telstra a few weeks back, where they were doing ingress filtering but announced routes that actually conflicted with the ROAs. And the issue is: should you be doing egress filtering as well as ingress ‑ ins and outs? Should everybody filter what they say, as distinct from what they learn? In some ways, this is an interesting question. Is it about protecting everybody else from your own fat fingers, or protecting yourself from the fat fingers of others?
It's food for thought, and in some ways it's probably worth doing both ingress and egress, but maybe there is a different way of looking at it. I actually think that the prefix view of route origination lacks one element of policy that might be helpful for everyone else. Because, if I'm a network, I can actually say what the maximum extent of prefixes I'm going to originate is. It's almost like RPSL: I say what I plan to do, and if you see me doing anything more than what is in that attestation, filter it out; it's rubbish. In some ways, adding that AS attestation may actually give us bigger coverage, because, if I attest as an AS, it doesn't matter whether the prefix has a ROA or not; you can still say: Geoff, you have gone one step too far, irrespective. So we're not waiting for every prefix to get signed if we actually had some AS‑level attestation of the maximum announcement set.
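A toy sketch of what checking such an AS‑level attestation might look like: the AS publishes the maximal set of prefixes it will originate, and anything seen beyond that set is dropped, ROA or no ROA. All numbers are invented:

```python
import ipaddress

# Hypothetical attestations: each AS lists the maximal set of prefixes
# it will ever originate (announced routes may be more specific).
ATTESTED = {64500: [ipaddress.ip_network("203.0.113.0/24")]}

def verdict(origin_as: int, prefix: str) -> str:
    nets = ATTESTED.get(origin_as)
    if nets is None:
        return "unknown"                    # this AS attested nothing
    p = ipaddress.ip_network(prefix)
    if any(p.subnet_of(n) for n in nets):
        return "within the attested set"
    return "invalid: beyond the attested set"

print(verdict(64500, "198.51.100.0/24"))    # -> invalid: beyond the attested set
```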

So we might want to think about that, because it might be a useful way to stop issues such as Telstra's problem and, in truth, cover a lot of the space a lot of the time. Most of our routing incidents are like that.

And there are some deeper questions: what are we trying to protect people from, and from whom? If this really is a user‑protection mechanism, to stop people being taken to the bad place, then maybe this really is a job for transits and exchanges, and stubs should maybe just do egress filtering, because if the transit is doing ingress for you, you are not really adding much; you are just doing more work for no real gain.

That's my time. I have got five minutes left for questions; I'll stop sharing in a second. There is a URL down there: if you don't have time to have your question asked, go read that, because those are the questions I have had so far, and it's a lengthy document, because the answers are probably bigger than you ever wanted.

Thank you.


WOLFGANG TREMMEL: Thank you, Geoff. I have one person at the microphone queue. Gordon, you have the microphone.

AUDIENCE SPEAKER: Hi Geoff. Interesting presentation as always. So, my question is: What do you think ‑‑

WOLFGANG TREMMEL: Can you please say your name and affiliation.

AUDIENCE SPEAKER: I am Gordon, and I am affiliated with the Swedish Royal Institute of Technology. So, I have one question for you. Right now, your method only detects if a network in the middle completely drops the RPKI invalid route. Do you think this method could be refined to detect de‑preferencing?

GEOFF HUSTON: I actually disagree with de‑preferencing. If something is bad, then why are you going there? It's bad. And de‑preferencing seems to be this kind of thing that goes: well, it's bad, but it's not that bad. It's the same as in DNSSEC, where 35% of the world's users sit behind recursive resolvers that validate, and when the answer comes back SERVFAIL, they go: oh, I'll try a resolver that doesn't do validation, and go there anyway. That's just crazy. And de‑preffing bad is not getting rid of that; it's still bad and you still go there. So: lousy idea, don't do it.

AUDIENCE SPEAKER: A follow‑up question, if I may: So why ‑‑

WOLFGANG TREMMEL: There is nobody else at the microphone queue.

AUDIENCE SPEAKER: Why do you think transits sometimes do de‑preffing instead of dropping it fully? Are they just afraid to commit to RPKI?

GEOFF HUSTON: Yeah, they are chicken. This is a complex system. There is a huge amount that can go wrong. Dropping routes means dropping service, so every marketing manager sits there and goes: oh my God, what if this goes wrong? Will we get hoisted and pilloried on Slashdot or somewhere? And, yeah, maybe, but, at the same time, if you start taking people into an attack when you knew it was bad, I think that's an even worse sin to commit than not doing it at all. So, no, I think it's a lousy way out. If it's bad, it's bad.

What security and validation give you is not "well, this is 50% bad". No ‑ what security tells you is what's good. Really good. And what we infer from that is: when we know the entirety of what an AS says it's going to originate is attested as part of this set, then if what we see is in the other set, the anti‑set, it's bad. It's not partially bad. It's not a bit bad on Wednesdays. It's just bad. Drop it.

WOLFGANG TREMMEL: Thank you. I have one question in writing from Patrik Tarpey of Ofcom, who asks: what is the optimum implementation of RPKI if transit providers and Internet exchanges are the only parties required to sign?

GEOFF HUSTON: There is a difference between signing and filtering. Let's understand the difference here, because I'm not talking about generating the credentials and signing prefixes; I'm talking about what happens in the routing system with those signed prefixes. And what I'm saying is that transits and exchanges have the most leverage in getting rid of obviously bad advertisements if they do RoV filtering ‑ if they drop invalids. Stubs don't need to do it if transits and exchanges do it for them.

And so it's a real food‑chain issue. The other thing about this is that, in theory, the exchanges and transits carry the big bulk of routes. They are the folk at the top of this food chain because they are providing services down to others in the network hierarchy, so, in some ways, if they do it, they have the maximum leverage and the maximum effect for their efforts. So, my view is: if you are providing transit, this is the kind of step you should be looking at. If you are a stub, egress filtering is a damn fine idea, but ingress ‑ not so convinced.

WOLFGANG TREMMEL: Thank you, Geoff. I am cutting off the microphone queues here. Thanks for your presentation, and see you around and we'll see you hopefully at a real physical RIPE meeting in the future. Thank you.

GEOFF HUSTON: One of these days. Thank you very much.

(Virtual applause)

ALIREZA VAZIRI: All right. So we have less than 15 minutes left in this session. Our next presenter is Alun Davies. Alun, the floor is yours.

ALUN DAVIES: Hi. Everybody can hear me, right?
Okay. Let me just bring up my slides in a moment.

Okay. This all looks in order. Right, so. Apologies. Let me just switch to... yes, okay.

My name is Alun Davies, I'm from the RIPE NCC, and I just want to take a few minutes to talk about some changes that are coming up for RIPE Labs. Hopefully most of you have heard of RIPE Labs, visited the website and read some of our articles. But if you haven't, not to worry; now is a great time to start.

Just over the past week or so, we have put up a lot of content on RIPE Labs that will help you get background information on a lot of the talks and events that have taken place during this RIPE meeting. So do go take a look.

Okay, so the main thing I want to talk about is some of the changes that are coming up soon for RIPE Labs. I also want to start by giving a bit of background and talking a little bit about the idea behind RIPE Labs.

So, this idea, this RIPE Labs idea came up way back in 2009, and back then Robert Kisteleki, who is our manager of research and development, came up with this thought that what we needed at the RIPE NCC was a place where we could talk to the community about all these new ideas and prototypes that we wanted to try out, and the community would be able to tell us exactly what they thought of them in this relatively informal way.

And as that idea evolved a little bit, it soon became clear to everyone who was working on this, that everybody in the community should be able to use RIPE Labs in this way, to get feedback on the work they were doing.

So that was the idea, and it was first presented to the community at a regional meeting in Moscow that same year ‑ on a broken laptop, no less. Everybody really liked the idea, a team got together to pick the right platform for it, and it was officially launched at RIPE 59; that's 22 RIPE meetings ago. At the helm back then, right from the start, was Mirjam Kuhne, who took on this new role of community building, and whose task was to be the key person behind the success of RIPE Labs. So, yeah, that's where it all started.

Now, we're 11 years and 1,200 articles ‑ give or take a few ‑ later, and RIPE Labs, under Mirjam's guiding hand, has really become this trusted source of information on all things RIPE. We have built up a really strong readership of people all across the community who really do look to RIPE Labs to stay up to date on the latest developments with RIPE and the latest things coming out of the RIPE NCC.

So, yeah, that's ‑‑ it's been very successful in that regard and now, as we come to the end of 2020, there are some changes to come. So I just want to go through a couple of these right now.

The first change that you'll all see soon enough is that RIPE Labs is going through a major redesign. We have been working for the last year with a company called Mangolab, who have been helping us come up with a whole new look for RIPE Labs. As you see from the little preview here, the new RIPE Labs will have a much more modern and up‑to‑date look. Stylistically speaking, the aim has been to make RIPE Labs more recognisable and more appealing to new audiences, and, as much as possible, we have really tried to make sure that the content that's available to you on RIPE Labs is laid out in as clean and logical a way as possible, so you can find what you are looking for quickly and without any fuss.

On the other side of all this, another important part of the redesign is that we wanted to bring back to life and kind of recapture part of that original vision behind Labs. Because although RIPE Labs is a really, you know, useful source of information to a lot of people, it hasn't always necessarily worked that well as a feedback mechanism for the community in the way that it was originally supposed to. So, with the redesign, we have been trying to remedy this. Partly we have taken steps to make sure that when you come to RIPE Labs as a reader, your attention will be drawn to those elements of the site that invite engagement and discussion.

In addition to that, we have also really put a lot of work into making authors more visible and improving the author profiles.

The idea behind that is basically that, if you come to RIPE Labs and contribute your knowledge or expertise there, you'll be recognised for doing so. So that's part of the aim. But in addition to this, we also want to make sure that when people are reading RIPE Labs, the people who wrote the content are really visible to them, so that readers can engage with them directly. That's really part of the aim here: to create the feel of a real, live hub of community engagement on RIPE Labs.

So, while there are a bunch of other things about the redesign that I could get into, that's all I want to say for now. We expect this launch somewhere in the coming months, so do keep an eye out for that.

Moving on, though, the other big change is we have a new RIPE Labs editor. Just to put this in context a little bit. Over the years, as you know RIPE Labs has grown and grown, the work we do at the RIPE NCC around community‑building has also grown, and, in the midst of all this, RIPE Labs has really remained this important platform that we use to help build a more active and informed community.

Mirjam has been involved in a lot of these community‑building activities over the years, and one of her main responsibilities at the RIPE NCC has always been to make sure that RIPE Labs stayed successful and kept producing the best possible content.

Now, obviously, with Mirjam moving on to new challenges as the RIPE Chair, somebody else needs to step up to take the lead on RIPE Labs. So, without any further ado: hello, my name is Alun, I am the new RIPE Labs editor. I come from quite an academic background, starting off in the philosophy of language as an undergraduate, then moving into logic and linguistics during my postgraduate work. Then, after all that, I took a step into computer science and did a master's there, where I got really interested in machine learning and data science. Some years ago ‑ not that long ago ‑ my family and I made the move to Amsterdam, and four years ago I started working for the RIPE NCC. Since then, I have had the chance to go to a lot of RIPE meetings and other equally cool and interesting events, and I have really come to enjoy and value the work that I get to do with the community. I know I'm not nearly so familiar a face as Mirjam is, but I do hope to get to know a lot of you much better in the coming years.

Moving forward:
Now is a really good time to contribute to RIPE Labs, with all these changes that are happening. One thing that's really important to me is that we're at this point where the RIPE community is growing and growing, and more and more people from different backgrounds are getting involved. As part of the vision for RIPE Labs moving forward, the message I want to get across is: whether you have been in the RIPE community from the beginning or you are just joining now, we want to hear from you. Part of what I really want to do with RIPE Labs is to make it a hub where all users, whatever your background, wherever you come from, can talk about those important ideas that are going to shape the future of the Internet.

So, yeah, you can get involved in RIPE Labs, as always, by reaching out to us at labs [at] ripe [dot] net, and we invite anyone who has something they think is important to say about the Internet to get in touch and start talking to us about how we can get that idea out to the community.

So, yeah, with all that said, I would just like to say thanks to everyone who has ever contributed to RIPE Labs in the past, and I'd also like to say thanks in advance to anybody who is going to contribute to RIPE Labs in the years to come.

So, thank you very much. That's my talk.
(Virtual applause)

ALIREZA VAZIRI: Any comment or question? Okay, I don't see anyone at the microphone queue.

Thank you very much and give him a virtual clap.
(Virtual applause)

ALIREZA VAZIRI: Okay, so this session has got to the end and we do have a few minutes of coffee break, the next session will start at two. Thank you.

(Coffee break)