RIPE 81

Archives

Plenary session
RIPE 81
2 p.m.
27 October 2020.


CHAIR: Good afternoon, this is the session for the RIPE Plenary. I have a few small announcements to make. Brian Nisbet and I will be chairing this session. Welcome back. Please don't forget to rate the talks; it's on the programme. Please also don't forget that if you want to nominate yourself or somebody else for the Programme Committee, the nominations are open until 3:30 this afternoon. So that's one‑and‑a‑half hours left. And without further ado, I would like to present to you the first presenter of this session, which is Nico Schottelius. The floor is yours!

NICO SCHOTTELIUS: So, here we are. It's really cool to be here, I love RIPE conferences, and I have to say we did some karaoke before with the music, but let's get back to a little bit more technical stuff.

Today, I would like to talk about high speed NAT64 with P4. And you know what, let's just start with a little bit of what P4 actually is. You know what, I'll actually give you an option for another language, maybe. I'm getting used to Meetecho right now, and you have the first poll here running.

So, the poll is running, but just before that, let me talk a little bit about the motivation for the whole project I'm talking about here. Most of you probably already know that we are running short on IPv4, and the last time I actually gave this talk ‑‑ this is quite interesting for me ‑‑ you can see here we have 0.39 /8s left worldwide in IPv4 space, or six‑and‑a‑half million IPv4 addresses. The last time I gave this talk about IPv4 and NAT64, I wrote 10 million, so we have lost almost 40% of the remaining IPv4 space since the last time I gave this talk.

When I checked, around one third of the traffic that goes to Google is IPv6 traffic now. As you know, it always jumps up and down a little bit, depending on whether we have a pandemic and whether or not we are on a weekend. Generally speaking, IPv6 is coming and we need transition technologies, and NAT64 is quite an easy one. You also know MAP‑T and MAP‑E; they are also very valid mechanisms. I personally prefer NAT64, but everybody has their own preference.

So, when I am talking about high speed NAT64, the question is, like, how fast? And over this session I will try to convey to you that fast can be quite fast. So, a short review for those who are not looking at their packets daily, enjoying all the headers and everything.

There are a couple of intrinsic problems when you want to convert IPv6 to IPv4 or vice versa; that is, you know, basically they are incompatible, and this is not like a theoretical thing ‑‑ there is no IPv10 that can magically solve both of them. Because, let's have a look at this. If you have a look here at the IPv4 header, and if you try to squeeze in just those two IPv6 addresses, you will see that, if you want to do that, well, we need to get rid of most of the IPv4 header. So, IPv10, merging both together, doesn't work.

But what do we actually need to do? You have an ethernet frame and, in that frame, you already say, like, the next protocol is, by the way, IPv4 or IPv6. So when you do translation, you not only need to change the protocol itself, but you also need to change the ethernet frame.

Address sizes, I think are probably clear for everybody. The format is a bit different, but not too different for the general case.

And checksums. Do you see the checksum in the IPv6 header? If I look closely, I can't see it, because it isn't there. So we actually have to drop something, or we have to create something, when we convert in one or the other direction.
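
To make the header comparison concrete, this is roughly what the two headers look like when you declare them in P4_16 (a minimal sketch; the field names are illustrative, not taken from the talk). Note the hdrChecksum field in the IPv4 header and the complete absence of any checksum field in the IPv6 one:

    header ipv4_t {
        bit<4>   version;
        bit<4>   ihl;
        bit<8>   diffserv;
        bit<16>  totalLen;
        bit<16>  identification;
        bit<3>   flags;
        bit<13>  fragOffset;
        bit<8>   ttl;
        bit<8>   protocol;
        bit<16>  hdrChecksum;   // header checksum: only exists in IPv4
        bit<32>  srcAddr;       // 32-bit addresses...
        bit<32>  dstAddr;
    }

    header ipv6_t {
        bit<4>   version;
        bit<8>   trafficClass;
        bit<20>  flowLabel;
        bit<16>  payloadLen;
        bit<8>   nextHdr;
        bit<8>   hopLimit;
        bit<128> srcAddr;       // ...versus 128-bit addresses,
        bit<128> dstAddr;       // and no checksum field at all
    }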

An alternative to what I'm showing you, NAT64, is to just do a high‑level proxy; you can do IPv4/IPv6 proxies, and these are also very valid approaches. However, one of the reasons to go with NAT64 is that you don't care about the actual protocol, you maybe have a fixed set of machines that you need to translate, and it's also very, very scaleable.

So, coming back to NAT64. What do we actually do?

As I said, if you are not looking at this stuff every day, this might look complicated, but it's actually not. The first thing, if you want to translate from v6 to v4 or vice versa, is that you change the ethernet header; you say, like, okay, the next packet is v4, or v6. Then you change the IPv4 and IPv6 headers. And then comes the interesting part: we actually have to change part of the TCP, UDP, ICMP, or ICMP6 headers, because this is where the checksums are located ‑‑ in the IPv6 case, they are not in the IP header any more. So those are the three stages that we can basically have a look at.
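
As a rough illustration of those three stages, a v6‑to‑v4 rewrite action in P4_16 could look something like the sketch below. This is not the code from the talk: the action and field names are made up, it assumes the ipv4_t/ipv6_t headers shown above are collected in a struct called hdr, and it glosses over fragmentation and the ICMPv6‑to‑ICMP type mapping.

    // Hypothetical v6-to-v4 rewrite, assumed to sit inside the ingress control
    // of a v1model-style program. v4_src and v4_dst come from the control plane.
    action translate_v6_to_v4(bit<32> v4_src, bit<32> v4_dst) {
        // Stage 1: ethernet header -- announce that the next protocol is IPv4.
        hdr.ethernet.etherType = 0x0800;

        // Stage 2: replace the IP header -- build IPv4, drop IPv6.
        hdr.ipv4.setValid();
        hdr.ipv4.version        = 4;
        hdr.ipv4.ihl            = 5;                          // no IPv4 options
        hdr.ipv4.diffserv       = hdr.ipv6.trafficClass;
        hdr.ipv4.totalLen       = hdr.ipv6.payloadLen + 20;   // add the v4 header length
        hdr.ipv4.identification = 0;
        hdr.ipv4.flags          = 0;                          // fragmentation ignored here
        hdr.ipv4.fragOffset     = 0;
        hdr.ipv4.ttl            = hdr.ipv6.hopLimit;
        hdr.ipv4.protocol       = hdr.ipv6.nextHdr;           // fine for TCP/UDP; ICMP needs extra care
        hdr.ipv4.hdrChecksum    = 0;                          // recomputed before deparsing
        hdr.ipv4.srcAddr        = v4_src;
        hdr.ipv4.dstAddr        = v4_dst;
        hdr.ipv6.setInvalid();

        // Stage 3: the TCP/UDP/ICMP checksum still has to be adjusted separately,
        // because the pseudo-header it covers has changed (see further below).
    }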

So, P4. Before I dive into this, let me have a quick poll: who of you has actually used P4 already? And I see ‑‑ so, I started a poll for 60 seconds, I will keep it running and come back to it later. I really wonder who of you has actually already used P4, or heard about it, or has no clue what P4 is. This will help me a little bit with how much I explain about P4.

While we are waiting for the result, I can already tell you a little bit about P4, and that is: P4 is nicely protocol and vendor independent, most of the time. So, P4 is, generally speaking, a very nice programming language for networks. Let me talk more about this in about ten seconds; I am really curious who of you has used it or knows it.

So, we have 7% who have used it. Pretty cool. Know it: 31%, roughly one third. And no clue what P4 is: 61%. Fair enough. That is fine with me, and I have been there some time ago too.

So, I will jump to the second slide first. The P4 language is protocol‑independent, and the generic idea of P4 is that you have an ingress match‑action pipeline: basically you get a packet into your fabric ‑‑ it might be a switch, it might be a router or anything that basically gets stuff in and gets stuff out, it can even be just a PC. Then you have some kind of logic, and this is the interesting part, where you can say what you are going to do with it. And then again, you have very simple match‑action logic on the egress pipeline. So you can basically process a lot of things in parallel in those different pipelines, and the framework itself gives you some help with parsing the actual protocols ‑‑ this is actually something I almost forgot. Before you can actually process a packet in P4, you need to parse it. Generally speaking, a packet is just a bunch of bits and bytes, and to be able to handle it, we need to say, well, this part of the packet is actually ethernet, that is IPv4, IPv4 looks like this, and so on and so on. This is what the parser helps us with.

After we parse the packet and we say, like, okay, we have a bunch of IPv4 headers, IPv6 headers, maybe ‑‑ not FTP ‑‑ but TCP, for instance, then we know we can actually match, and matching in P4 generally happens on tables. So you can think about it a little bit like an SQL table, where you say, all right, I have an entry here on the left and you do a select, if you want. It's not really true in P4, but you can say you do a select on an IP address, and if you find a match in the table, then you do something.
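
For the roughly two thirds who have not seen P4 before, a minimal sketch of what such a parser and table look like in P4_16 (illustrative names again, assuming the headers from earlier are gathered in a struct and the usual v1model boilerplate around it):

    // The parser tells the pipeline which headers are present in the packet.
    parser NatParser(packet_in pkt, out headers hdr,
                     inout metadata meta, inout standard_metadata_t std_meta) {
        state start {
            pkt.extract(hdr.ethernet);
            transition select(hdr.ethernet.etherType) {
                0x0800:  parse_ipv4;
                0x86DD:  parse_ipv6;
                default: accept;
            }
        }
        state parse_ipv4 { pkt.extract(hdr.ipv4); transition accept; }
        state parse_ipv6 { pkt.extract(hdr.ipv6); transition accept; }
    }

    // Inside the ingress control: the "select on an IP address" is a table lookup.
    table nat64_v6_to_v4 {
        key = {
            hdr.ipv6.dstAddr : lpm;       // longest-prefix match, e.g. on a /96
        }
        actions = {
            translate_v6_to_v4;           // the rewrite action sketched above
            NoAction;
        }
        default_action = NoAction();
    }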

So, in other words, for the programmers: it's like a huge switch statement. All very much simplified.

The very cool thing about P4 is actually none of this ‑‑ I mean, this you can easily do yourself. The interesting part ‑‑ I can go back one slide ‑‑ is that P4 is more or less target independent, and there is a general target called BMV2, which is a software emulation of the stack. It's really easy to prototype with it, and you even have support for checksumming the whole payload.

Then there is real hardware, and in this case I use the NetFPGA; it's a four times 10‑gig card. It's basically an FPGA where you compile P4 to PX and then to HDL, and then you get a bit stream that you upload to the card. And the cool thing ‑‑ and this is probably one of the rare things you need to remember from this talk ‑‑ is that P4 allows you to operate at line speed. So if you have a 10‑gig card, a 1‑gig card, a 40‑gig or a 100‑gig card, in practice every operation that is within the P4 spec you can do at line speed.

This is pretty cool. So once you write your code, you can test it on 1 or 10‑gig, and then port it to 40 or 100‑gig. I am saying porting because while in theory it is all vendor agnostic, in practice there are some bits that need to be changed a little bit. It's not the perfect world yet, but the direction is good and it's also rather easy to write the code.

So, how does a NAT64 design look in P4? In practice, the code I have written works equally well on the software emulation as on the NetFPGA. Just with the NetFPGA we had the drawback that it doesn't properly support functions, so you need to use defines, which ‑‑ again, if you are a programmer coming from a C background ‑‑ are like a kind of macro that replaces the code textually.

These are the small nuances that are different between different targets, but hey, it's not such a big problem.
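
As an illustration of the define trick (made‑up names; the point is only that the C preprocessor, which P4_16 is run through, pastes the code in, so the FPGA backend never sees a function call):

    // On a target with function support this could be a normal helper.
    // On the NetFPGA, the same thing as a C-style macro:
    #define SWAP_ETHERTYPE_TO_V4(hdr)        \
        hdr.ethernet.etherType = 0x0800;     \
        hdr.ipv4.setValid();                 \
        hdr.ipv6.setInvalid()

    // Used inside an action exactly as if it were a function call:
    action do_translate() {
        SWAP_ETHERTYPE_TO_V4(hdr);
    }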

So in general, for NAT64 we have IPv6 hosts and we have IPv4 hosts. We have a P4 switch ‑‑ this is where the software, if you like, runs; this is where you plug in the cables ‑‑ and usually you have another controller. The reason for the controller is that this thing here is stateless; you don't change things here in RAM or anything. However, if you want to have a stateful NAT64 translation, the switch cannot do this by itself. What you can do is say, well, if I don't have a matching table entry ‑‑ remember, it's all about tables in P4 ‑‑ I will ask the controller; I send the packet to the controller and say, I don't know what to do with this packet, please help. Then the controller can create a table entry and say, like, well, from now on we have a table entry that matches the source of this IP address, and then the switch can continue processing it.
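
A rough sketch of that "no table entry, ask the controller" logic, assuming a v1model‑style target with a dedicated CPU port (the port number, the table and action names, and the controller‑side plumbing are all assumptions, not the code from the talk):

    // Inside the ingress control of a v1model program.
    const bit<9> CPU_PORT = 255;          // assumed punt port towards the controller

    action send_to_controller() {
        // No NAT64 state yet: punt the packet. The controller creates a new
        // table entry (the mapping) and subsequent packets match in hardware.
        standard_metadata.egress_spec = CPU_PORT;
    }

    table nat64_state {
        key            = { hdr.ipv6.srcAddr : exact; }
        actions        = { translate_v6_to_v4; send_to_controller; }
        default_action = send_to_controller();   // table miss means "no state yet"
    }

    apply {
        if (hdr.ipv6.isValid()) {
            nat64_state.apply();
        }
    }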

In fact, this is actually not much different from what regular routers and Linux firewalls and BSD firewalls are doing. You have to create the state; in this case the switch and the controller are separated, but the logic is pretty much the same in all kinds of operating system kernels.

Right. So, what I want to emphasise is: if you don't have P4, if you have a regular situation with a router or NAT64 translator, you usually have it a little bit off to the side of the network. You don't have a translator in every one of your network segments. You have one segment, it might be an old part of your network, and that is all IPv4‑only. Great. You have another segment that is IPv6‑only. Great. So you forward all the packets down here, or maybe even further, and then they are processed and sent back ‑‑ that's without P4. With P4, you can just deploy the whole thing into one switch. You can connect IPv4‑only and IPv6‑only hosts to the same fabric. This is pretty amazing. So you don't need to distinguish anything or send everything to some central device; you will still have, somewhere around here or somewhere outside, a controller that can help with the stateful translations, if you need those. If you don't need those, you don't need a controller available per se.

Let's look at the fun facts. When you do NAT64 in hardware, you actually remember the good old protocol ARP ‑‑ like, where is the MAC address for this IP address in the IPv4 world? It doesn't exist any more in the IPv6 world. In IPv6, you have it built in; we call it NDP, the Neighbour Discovery Protocol. It uses multicasts, which depend on the source or destination ‑‑ I don't remember any more. Point being, the IPv4 host actually sends out a broadcast and says, hey, who has this IP address? And then the host that has it sends back and says, hi, I am here at this MAC address.

In the IPv6 world you always have an IP address. That's cool. This is the link‑local stuff. If you don't, you know, know much of IPv6 by heart at the moment: the link‑local stuff is cool, which means you always have connectivity even if there is no IPv6 network actually deployed.

The stuff looks a little bit different in the details. The problem is ICMP6, and I have to say this is a bit messed up. In ARP, it's all fixed blocks; in ICMP6, this is a chain of 64‑bit option blocks, so you can have a neighbour advertisement, a link‑layer option, an option 1, 2, 3, and this is a bit tricky to parse in hardware because you don't have large buffers; you can't, like, say, I want to go to some variable‑length offset and read something somewhere ‑‑ it doesn't work. You have to say: at this position I expect a header. Luckily, in most of the cases, the chain is actually exactly like shown here: it's ICMP6, it's a neighbour advertisement header, and it has the ICMP6 link‑layer option.
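
A sketch of what that fixed‑position parsing looks like for the common case (header and state names are my own); the hard part mentioned above is precisely that a general, variable‑length option chain does not fit this pattern:

    header icmp6_t {
        bit<8>  type;            // 135 = neighbour solicitation, 136 = advertisement
        bit<8>  code;
        bit<16> checksum;
    }

    header icmp6_na_t {
        bit<32>  flagsReserved;  // R/S/O flags plus reserved bits
        bit<128> targetAddr;
    }

    header icmp6_opt_lladdr_t {
        bit<8>  optType;         // 2 = target link-layer address
        bit<8>  optLen;          // in units of 8 bytes
        bit<48> macAddr;
    }

    // Extra parser states, reached from the IPv6 state when nextHdr == 58 (ICMPv6).
    state parse_icmp6 {
        pkt.extract(hdr.icmp6);
        transition select(hdr.icmp6.type) {
            136:     parse_icmp6_na;     // neighbour advertisement
            default: accept;
        }
    }
    state parse_icmp6_na {
        // Assume exactly one option, the link-layer address, right after the NA body.
        pkt.extract(hdr.icmp6_na);
        pkt.extract(hdr.icmp6_opt);
        transition accept;
    }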

But doing it really properly correctly is a tricky thing in P4. So you will see some facts about this in some years.

A short recap again of how it works from IPv6 to IPv4. Let's say there is an IPv6 host and we send a packet to 2001:db8:cafe::192.0.2.2. This is a valid IPv6 address; there is an IPv4 address embedded here at the end, but in the end this is just parsed as a textual representation of an IPv6 address. So, this is valid. It looks strange, it is strange, but it's valid. In the P4 switch, there is a table match that says, okay, anything that goes into this /96 network will be translated to some IPv4 network. What really happens then is that there is a table, and a function or a define will be called. And after that, the P4 switch deparses it: before, it was an IPv6 packet; now it's an IPv4 packet. That is how it works inside the switch.
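
The "IPv4 embedded at the end" part is literally the low 32 bits of the IPv6 destination address, so the rewrite can take it straight out with a bit slice. A rough sketch (action name and addresses are illustrative; the control‑plane line at the end is BMv2 simple_switch_CLI style, with the table name left as a placeholder):

    // Stateless variant: the table matches the /96 prefix and this action pulls
    // the embedded IPv4 destination out of the last 32 bits of the v6 address.
    action translate_embedded_v6_to_v4(bit<32> v4_src) {
        bit<32> embedded_dst = hdr.ipv6.dstAddr[31:0];   // 192.0.2.2 in the example
        hdr.ethernet.etherType = 0x0800;
        hdr.ipv4.setValid();
        hdr.ipv4.dstAddr = embedded_dst;
        hdr.ipv4.srcAddr = v4_src;        // translated source, installed by the controller
        // ... remaining IPv4 fields filled in as in the earlier rewrite sketch ...
        hdr.ipv6.setInvalid();
    }

    // A matching table entry could then look something like:
    //   table_add <nat64 table> translate_embedded_v6_to_v4 2001:db8:cafe::/96 => 198.51.100.1
    // (one entry on the /96 is enough; the addresses are documentation examples)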

So, you will remember v6 has a much bigger address space and IPv4 has a much smaller address space, so the direction of the translation matters quite a lot. From an IPv6‑only host we can make a table entry that says: map the whole IPv4 Internet. We don't need to make a specific entry for a specific network ‑‑ let's say we have a 10/8 network that we want to translate ‑‑ we can just make one table entry and say, well, we just map the whole Internet. We don't even think about which network we are going to address; we always map the whole Internet. Because we can. Because it's just a tiny, tiny, tiny bit of the IPv6 space.

However, the other way around, if you want to have a table entry for an IPv4‑only host that wants to connect to an IPv6‑only host, you cannot have a table entry there saying, well, 10/8 is mapped to the whole IPv6 Internet, because it doesn't fit; this size is too small. Even if we say we map the whole IPv4 Internet into some sub‑range of the IPv6 space, we can't map the whole IPv6 space. So you will always see this asymmetry in the table entries, because IPv6 is so much bigger than IPv4. In practice, IPv4 will only have a sub‑part of the IPv6 network mapped.

Right. Then, as I already mentioned before, there is a big difference between stateless and stateful handling. Stateless handling usually means one‑to‑one mappings: you can say you have the 10/8 in one part of your network and you just map it to a tiny IPv6 network that you have. Stateful mappings are interesting because you can hide an IPv6‑only network behind one IPv4 address. Conceptually, this is very similar to what we already do with NAT on IPv4: you can say you have a lot of hosts, but you hide them behind one IPv4 address. But for this, as I said, you need an active controller there.

On the right, I have shown you the process, how it goes through, but I think ‑‑ yeah ‑‑ it's quite clear, probably, already.

Right. One thing that is also quite interesting: there are checksums, as I mentioned before, and the checksums in TCP, UDP, ICMP and ICMP6 include the payload. The thing is, we can't checksum the payload on the NetFPGA, because it doesn't have support for calculating a checksum over the payload. So let me repeat: you have an IPv4 packet with TCP in it. You want to translate it to IPv6. You will have a totally different checksum afterwards, because the header in front has changed. But we can't recalculate the checksum over the payload, because that is not allowed, or not possible, on the NetFPGA. So the solution here is to calculate the effective difference: take the sum of all the relevant IPv4 header fields, take the same thing for IPv6, diff them, and apply the diff to the checksum field. It's a bit crazy, but it works.
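
A minimal sketch of that difference trick for the v4‑to‑v6 direction, written against the header fields from earlier plus an assumed udp_t header parsed into hdr.udp (my own names, not the talk's code). It assumes an IPv4 header without options, that the new IPv6 header has already been filled in while the old IPv4 header is still valid, relies on the fact that the Internet checksum can be computed over 32‑bit chunks and folded afterwards, and ignores the UDP zero‑checksum special case:

    // Adjust the UDP checksum without ever touching the payload:
    // new = old + sum(v4 pseudo-header) - sum(v6 pseudo-header),
    // in one's-complement arithmetic (subtraction = adding the complement).
    action fix_udp_checksum_v4_to_v6() {
        bit<64> s = (bit<64>)hdr.udp.checksum
            // IPv4 pseudo-header pieces (UDP length = totalLen - 20, no IP options)
            + (bit<64>)hdr.ipv4.srcAddr
            + (bit<64>)hdr.ipv4.dstAddr
            + (bit<64>)(hdr.ipv4.totalLen - 20)
            + (bit<64>)hdr.ipv4.protocol
            // IPv6 pseudo-header pieces, added as 16/32-bit complements (i.e. subtracted)
            + (bit<64>)(~hdr.ipv6.srcAddr[127:96]) + (bit<64>)(~hdr.ipv6.srcAddr[95:64])
            + (bit<64>)(~hdr.ipv6.srcAddr[63:32])  + (bit<64>)(~hdr.ipv6.srcAddr[31:0])
            + (bit<64>)(~hdr.ipv6.dstAddr[127:96]) + (bit<64>)(~hdr.ipv6.dstAddr[95:64])
            + (bit<64>)(~hdr.ipv6.dstAddr[63:32])  + (bit<64>)(~hdr.ipv6.dstAddr[31:0])
            + (bit<64>)(~hdr.ipv6.payloadLen)
            + (bit<64>)(~(bit<16>)hdr.ipv6.nextHdr);

        // Fold the carries back in (one's-complement addition).
        s = (s >> 16) + (s & 0xffff);
        s = (s >> 16) + (s & 0xffff);
        s = (s >> 16) + (s & 0xffff);
        hdr.udp.checksum = (bit<16>)s;
    }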

Here, actually, I have shown the code for UDP, how it works. For the v4 sum, it takes the source and destination addresses, the total length minus 20, plus the protocol, which is UDP in this case.

For the v6 sum, you have the v6 source address, the destination address, the payload length and the next header, which always points to UDP. And the new checksum is the old UDP checksum plus the v4 sum minus the v6 sum, so that's the difference in the end. Complicated, but it works. This is what you see when you actually do translation in hardware.

This would be the code for it. I don't want to bore you too much with this.

You see it's not terribly complicated but it's also not the easiest thing.

So let's come to the interesting part. Drum roll: the results. As I mentioned, I was using a 10‑gigabit card and I was comparing with software solutions, specifically with Tayga and with Jool, and now comes the interesting part. I measured multiple times in all directions with TCP and UDP ‑‑ UDP was unreliable in the version of the tool I used, unfortunately ‑‑ and it turns out that Tayga actually performed on that machine at only around 3 gigabit per second. Jool, which is a kernel module for Linux, performed at up to around 8.25 gigabit per second, and that is quite impressive, because the NetFPGA, while being able to run at line rate with 9.29 gigabit, is not dramatically faster than Jool. However, and now this is the important takeaway: the same P4 code that I wrote can, in theory, without a lot of changes, just by porting to a different hardware target, scale up to 40 gigabit, 100 gigabit, terabit, you name it. That is the really cool thing about P4.

Right. So what's the conclusion?

It is possible, and I have done it: you can implement NAT64, including stateful NAT64, in P4. It is really surprising how well Jool performs; if you haven't checked out Jool, it's open source, it's a kernel module, it should be in mainline Linux ‑‑ it isn't yet, so you have to compile it yourself. It's really good.

Then two last things. The NetFPGA that I tested is actually not stable, so if you want to go down the P4 way, you have to find stable hardware. But besides that, the whole P4 stack has quite a lot of potential. I can really recommend looking into it.

That is from my side. I hope you enjoyed this talk and I am now open for questions if you have any questions.

BRIAN NISBET: We have ‑‑ first off, thank you very much, Nico. A lot in there, for sure. So, we have a couple of questions that came in by text. Not seeing anyone at the mic, so we're going to read this question out.

The first one is from Roger Kruger: Similar to your presentation, would NetFPGA allow me to do, for example, SIIT‑DC: taking incoming IPv4 traffic to a small set of IPv4 addresses and then translating it towards a v6‑only data centre network behind it?

NICO SCHOTTELIUS: Absolutely, yes, absolutely. Yes, easily.

BRIAN NISBET: I do see we have someone at the mic, so we're just ‑‑ I'll ask this other question and then we'll come to Tom at the mic. Interesting how I can see the full name. That was from Bipark. And Alexander Zubkov from Qrator Labs asks: Did you use only FPGA cards, or some switches too?

NICO SCHOTTELIUS: The setup was basically two computers; one computer was really just a box for generating packets, and the other box had the NetFPGA. So it was directly connected, no switch in between, just a direct attach cable.

BRIAN NISBET: And did you base your solution on some ready P4 stack, or did you write your own?

NICO SCHOTTELIUS: (Laughs) nice one. So, there is a P4 stack for the NetFPGA, and it is quite a big framework. If you want to do anything with the NetFPGA, you have to use it, because otherwise you would have to invest years on top of that to be able to use the card.

BRIAN NISBET: Okay, we're going to give Tom audio and let him ask his question.

AUDIENCE SPEAKER: Thank you, Nico. A really interesting talk. I am Tom Hill, I'm working on behalf of British Telecom. This is out of curiosity, really, around the general trend of manufacturers adding things to network interface cards and putting them into servers. So, I'm thinking of the trend of putting FPGAs or systems on the network card, where you can do all sorts of crazy offload things, and this looks like it would actually be a really useful solution, or a problem that could be solved by that. Have you thought about potentially approaching one of these vendors and speaking to them about what you are doing, and asking how they would help support offloading that function, maybe with an FPGA on the NIC? Or possibly, could you have brought this to CUDA?

NICO SCHOTTELIUS: So, let me come back to CUDA first, because this is the more tricky part. I am not aware of any P4 stack there. I have seen some questions in the chat about GPUs in general. In theory, you know, everything is possible. In practice, you also need to look a little bit at how it is designed, and the P4 language is already nicely prepared for such things. So if you go to CUDA, you would probably want to have a P4 stack on top of it, and you also need to have some kind of PHY to connect to the network later. So, generally speaking, it's possible; in practice, I'm not aware of any device that is really suitable for this at the moment.

So, the other question: I was in touch with some vendors some time ago. The P4 situation changed a little bit, like how it is being driven or governed, so I'm not fully sure what the current state there is. Generally speaking, all of the code is open source, it's available online, and I would actually be interested in seeing it more widely adopted, because basically, if you can do this kind of stuff in your network equipment, it really helps you to simplify your network a lot.

BRIAN NISBET: Okay. We're very tight on time, so I'm going to let Daniel in very quickly, we'll have a very quick question and answer here, and then that will be it; we'll move on to the next talk. Daniel?

DANIEL KARRENBERG: Hello. Nico, thanks, pretty cute solution ‑‑ Daniel Karrenberg, chief scientist at the RIPE NCC ‑‑ pretty cute thing for the checksums. My question is, will that run on all P4 implementations and especially at higher speeds?

NICO SCHOTTELIUS: Yes. Yes. That is really something to emphasise here. The code I have written is very, very simple ‑‑ and maybe reach out to me afterwards, I should have put an e‑mail address here ‑‑ so, the code is written with quite a minimal set of P4 requirements. It should run on every P4 instance out there.

BRIAN NISBET: Right. Thank you very much, Nico. Very interesting talk. So thank you for this, and again, as you say, people can get in contact with you.

So, moving swiftly along, we have our next talk in this session which is from Robert Kisteleki from the RIPE NCC, and it's about ten years of RIPE Atlas. Has it really been ten years, Robert, or was it, in fact, just the start of March?

ROBERT KISTELEKI: It really has been; it's actually a bit more than ten years as you will see in my presentation. Welcome, everyone.

Indeed, back in the day ‑‑ as all of these ten‑years presentations have ‑‑ the RIPE NCC research team, the science team and the R&D team faced a really big problem whenever we were trying to measure things, namely that we didn't have, well, a good enough measurement network, one with enough vantage points in all of the interesting networks. So we were wondering what it would take to build one. We imagined that we would make something that is a collaborative platform, doing only active measurements, because that was more interesting than passive measurements. We imagined that it could be on an unprecedented scale, at least, you know, compared to ten years ago, with the possibility of having a device in every network.

Of course it will never happen, but at least as a target it would be nice.

We didn't want to do it alone; we wanted to rely on the community. We wanted to stay on the network level, so we imagined that we don't want to do Skype measurements, for example. And most importantly, we wanted to have a sustainable platform backed by the NCC, because the alternatives were short‑lived.

So what happened is, roughly, this: Around 2007 and '08 we looked at what existed. PlanetLab still existed; iPlane and Hubble don't any more, as far as I know. We did some feasibility studies, code name DAR. I'm not going to tell you what it is, but it does not stand for "Daniel and Robert", that's for sure. We looked at what the challenges are and what the potential for such a device is. We had interns who were looking at physical devices, comparing them: obviously this one is nice, but which one is actually nicer? We started on the architectural design, and the first public presentation about the idea was by Tony at RIPE 58.

In early 2010, we had the internal go decision; engage, basically. We selected the first hardware, we partnered up with a local supplier who could bring this hardware to life, and we said we are going to launch this at the next RIPE meeting, so that gave us six months.

We had the first external supporter; someone said, if you build it, I will love you. I'm not going to tell you who that is, but it was not from the NCC. Somewhere in summer we switched to high gear; as the example shows, there was no escape for us, because the partner said, we are going to deliver the hardware to you, you do what you want with it.

In August, we hired the first two developers, Victor and Andreas, and they bonded really, really quickly, which was very, very useful because the timeline was really, really tight. So this is August already. Mid‑September we had the first ‑‑ this is eight weeks before launch. Then early October, at the Borrel, someone mentioned: what if we call it Atlas? I think the day after, that was a Friday, we said RIPE Atlas, that's going to be the name, and over the weekend we converted all the code, all the packages, all the UI and everything into using RIPE Atlas instead of DAR. Somewhere in mid‑October, things started to work; brains started to talk to controllers and so on.

And then, very late October, the first probes went online ‑‑ I have probe number 1, but obviously that's just selection bias. Then, a week before the RIPE meeting, we had the first graphs coming up, which was highly, highly useful. We announced it at RIPE 61 ‑‑ please look at Daniel's talk if you are interested in the details ‑‑ and that is the official start of the service. Then we went to have a long sleep, because we were tired. But when we came back from the cave, we started working on the user defined measurements. We introduced the version 2 probe, some time later version 3, we hit our first milestone of 1,000 probes, anchors came online and so on. There was a big API change, and in August 2017 we hit the 10,000 mark, and, jokingly, some people said this is where we became the 10,000 eyes.

Then we started doing the VM anchors together with the community. Some of those VM anchors went live, and eventually we got to the software probes, and we're here at RIPE 81.

Where is here?

So at the moment, we have a bit more than 11,000 devices. We still have, amazingly, about 1,500 ‑‑ almost 1,500 ‑‑ version 1 and 2 devices. Remember, these are almost ten years old and they did not wear out their flash storage, so that's just amazing; we expected a couple of years, or perhaps a couple of months, but they actually worked really well.

Still, most of the devices are version 3, but of course, over time, these will be replaced by newer generations.

At any given point, we have about 25 measurements running, supplying a lot of the results to the system, and, of course, covering 177 countries and so on.

The most alive probe ‑‑ the probe that produced the most uptime ‑‑ has been up for almost ten years. That is a ninety‑something percent uptime for that probe.

We are celebrating the ten years of RIPE Atlas, and, in order to do that, we are doing a couple of things. You have heard that RIPE Labs is going to get a new dress, you may have heard that RIPE Stat is going to get a new dress, and RIPE Atlas is also joining this party. Just after the RIPE meeting, we plan to introduce a new UI and then evolve from there. We will be introducing birthday presents, so if you host a probe, when your probe reaches its first birthday, or the second or third, we will give you a bonus of credits, which is going to be proportional to the uptime of the probe, so please keep them up.

Then, we are also going to talk about how we got here. I am planning to share a couple of stories about how things went well and how they didn't, and other gimmicks as well. Please keep an eye on our announcements. We will have an open house event around the 18th of November, I believe, and then a software probe deployathon at the end of November. Please join these if you can.

Please go to the SpatialChat if you are interested in the details.

And then, what happens after the ten years: some further developments. Of course, it's difficult to look into the orb and see what is going to happen, but what we have plans for includes making the whole system more useful for the users. We will provide simpler and faster access to the data. You have probably heard about the possibility to get at the Atlas data in Google BigQuery; if you have not, I will also mention this in the MAT Working Group tomorrow. And, of course, some use cases are easier to support than others, so we plan to add more support for the most recognised use cases that our users have mentioned before.

Probe hosts can get a lot more value out of the system. We are going to work on that ‑‑ better probe dashboards, for example ‑‑ and we also need to support our helpers better in order to let them help us and the whole system. So we have to work on better support for sponsors and so on.

And the big thing is going to be a wider representation of RIPE Atlas in our service region and beyond, so we are going to have more probes in more networks, making the whole system more useful to everybody.

And then, finally, I would like to give a shout out and some acknowledgements to the people who have been involved with this whole project and have helped in one way or another, including the current team and other supporting teams at the RIPE NCC, and others who somehow contributed to this effort close to our hearts ‑‑ former colleagues and friends who are not necessarily with us in the development team any more. But most importantly, I want to give a shout out to the community for actually using Atlas and liking it and questioning it and debating it and sometimes telling us what doesn't work, because that's how we can improve. Please keep on doing that ‑‑ evangelising, translating and everything ‑‑ it helps us and the community in general.

So, with that, if there are any questions, I am happy to answer.

CHAIR: Are there any questions? Yes.

AUDIENCE SPEAKER: Thank you very much for this impressive presentation. I have a question about the probes. You mentioned that you have up to 10,000 probes now in place, in different versions: Version 1, Version 2, also Version 4, and I know that Version 4 is the hardware probe. Which kinds of metrics are you measuring on the hardware probe, and what is the data volume that you get per day? Because it's interesting to see what data volume you are providing to the central database, right?

ROBERT KISTELEKI: Yes, there are a number of metrics that one can look at. One is the total: we see around 930 million data points that we collect from the whole network per day. So, that's about 10,000 or 11,000 a second or so. Per probe, I believe the most basic setup is something like 3 to 4 kilobits per second, which is, you know, basically fine on a wired network. In terms of what we look at for the hardware probes: the reason why we had several generations is that some of them turned out to be more difficult to support ‑‑ in particular the versions where we depend on others for support ‑‑ but also price and physical availability and reliability were basically the most important questions.
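
(As a rough sanity check on those numbers: 930 million results per day is 930,000,000 / 86,400 seconds, which works out to roughly 10,800 per second, consistent with the 10,000 to 11,000 figure.)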

AUDIENCE SPEAKER: Okay. And now an additional question: this hardware probe that you mentioned, is it measuring inside the network? So it means that this hardware probe is a kind of user that is listening to the performance, collecting all this data?

ROBERT KISTELEKI: I think I would like to refer you to the documentation that we have about what they really do, but in essence, they are just end‑network devices trying to send packets and receive packets and then reporting on how successful that was. That's the shortest way I can put it.

AUDIENCE SPEAKER: Okay. Thank you very much.

CHAIR: Okay. Thank you, Robert, for your presentation, and thank you and the entire Atlas team for Atlas.

That's it for this session, everybody. We are going to have a short break and we'll be back at 3:00pm. You still have 45 minutes to put in your nomination for the Programme Committee. If you think, "I'm a new member, I am not experienced enough for this," do not worry; we would like to see some relatively new people there. So please send in your nominations. And of course, don't forget to rate the talks; it really helps us put together a good programme next time.

Thank you, and see you in a bit.

(Coffee break)