RIPE 83
DNS Working Group
Thursday, 25 November 2021
16:00 (UTC + 1)

SHANE KERR: Dave is experiencing some technical difficulties. Joao, do you want to do this for a bit?

JOAO DAMAS: Okay. Sure, why not? So, welcome to the RIPE 83 DNS Working Group. It's a short session, there is a lot of content, so we'll just get started. I have the slides. The agenda bashing: hopefully you see the list of talks there. There is also a RIPE NCC update that is not showing for some reason. And ‑‑

SHANE KERR: That's on page 2 of the agenda.

JOAO DAMAS: Okay, I see. Distribution is good until the left hand doesn't know what the right hand is doing.

Anyway, we don't have much room in the agenda, so if any of you has any other topics, hopefully it will be a short one that we can take in the AOB.

Dave, how are you doing with the audio? The reason we want to let Dave talk is because this is his last session as DNS Working Group Chair; as you know, he is standing down and we are selecting a new co‑chair at this time, which is the next item on the agenda, and so we wanted Dave to chair this session. Well, the show must go on, I guess.

Anyway, we put out a call for co‑chairs of the DNS Working Group a while ago. We opened the comment period, which closed yesterday. There were two candidates, both of them excellent, and everybody was happy with them. Some people decided to show support for both, which doesn't make our job easier, but it is what it is. That's why we're the bad guys of course, and in the end we got together and we saw somewhat stronger support for Moritz Müller over Brett Carr, so we reached a consensus, the three of us, that we would this time around pick Moritz Müller as the DNS co‑chair replacing Dave Knight. A round of applause for both of them really. Thank you very much to Brett for volunteering.

So, let's go straight into the first talk which, if we go back to the first page, is Peter van Dijk and Pieter Lexis from PowerDNS, talking about the proxy protocol.

PIETER LEXIS: Do we have the preloaded slides or should I share my screen?

JOAO DAMAS: I think the RIPE NCC makes the slides appear by magic.

PIETER LEXIS: Will I get to see them as well?

JOAO DAMAS: Yes, that's the idea. See how well it works.

PIETER LEXIS: Welcome everybody and good afternoon. Today I am here with Peter van Dijk; we are both senior PowerDNS engineers. Peter was mostly involved in the technical implementation of this, so if there are any hard questions, he can probably answer them.

So, what problem are we trying to solve with the proxy v2 protocol? What it is, I'll get into later. Usually DNS set‑ups can be fronted by load balancers or proxies, and the backend servers usually require the real client IP address for ACLs, views or other purposes. And you might not actually want your proxy or load balancer to do extensive packet parsing because, as you know, DNS information is usually encoded in the packet in a way that requires parsing the whole packet. And it might actually also be relevant for the backends to know what kind of transport was used. For instance, if a backend server does have scripting capabilities but does not support DNS over HTTPS, it would still be good to know if the packet came in over DNS over HTTPS.

So there are some existing solutions, both ones that we at PowerDNS have tried and some that have been attempted to be standardised. One is abusing the EDNS Client Subnet option between the proxy and the backend to pass this information to the backend. There is the X‑Proxied‑For, or XPF, DNS record that would be put into the additional section, and then there are bespoke and private EDNS options as well.

Each of them has drawbacks. For EDNS Client Subnet, it is of course a very bad idea to squat on an existing EDNS option, because it doesn't allow you to use that option in the real world then. This also requires you to actually parse and modify DNS packets that have come in at the proxy before handing them off to a backend. And like I said, there is then no way to pass on the actual ECS information that a recursor sends to the auth that is being proxied to, and there is also no transport protocol or other information that can be encoded in there.

XPF: one of the drawbacks is that the draft expired in the IETF. It also requires you to parse and modify DNS packets in flight, and it requires some magic not to break TSIG, because the record gets put into the packet and that could break TSIG, and you will have to handle TSIG at the proxy side to make sure this doesn't break. And of course, with private EDNS options, you need to change all the software in your chain and it can be really hard to debug with your standard tools.

So, the proxy v2 protocol, what is it? It is a binary protocol that prefixes the proxied data. So there is no encapsulation; it is just a prefix, a bunch of prefixed data. It can carry IPv4 and IPv6 addresses, ports and the protocol, so UDP or TCP; it also supports UNIX sockets, so you can even transport DNS queries over UNIX sockets if you want. It is also very extensible and supports a bunch of arbitrary type‑length‑value fields that you can define yourself. And many different load balancer vendors already support the protocol, and of course it goes very well together with EDNS Client Subnet from recursors to auths, without any interference there.

So where are these headers placed? Here is a relatively simple overview. A client sends a query towards a proxy or load balancer. This proxy or load balancer will then, as you see on the right, prefix the query it gets and send it off to the server in the backend. This backend server will only accept proxied connections or proxied queries. This means that if, for whatever reason, a query is sent to the backend server without a proxy header in front of it, it will just get a FORMERR or some other generic error, because the proxy header cannot be parsed. The server then just sends a regular response; there is no trace of the proxy protocol whatsoever there. And then the proxy or load balancer can hand that response back to the client.
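To make the prefixing concrete, here is a minimal sketch (not PowerDNS or dnsdist code, just an illustration based on the published proxy protocol v2 layout) of how a proxy could build the binary header for an IPv4 client and prepend it to a raw DNS message; the addresses and the query bytes are placeholders.

```python
import socket
import struct

# 12-byte proxy protocol v2 signature, fixed by the specification
PP2_SIGNATURE = b"\x0d\x0a\x0d\x0a\x00\x0d\x0a\x51\x55\x49\x54\x0a"

def build_proxy_v2_header(src_ip: str, src_port: int,
                          dst_ip: str, dst_port: int,
                          udp: bool = True) -> bytes:
    """Build a proxy protocol v2 header for an IPv4 client/backend pair."""
    ver_cmd = 0x21                     # version 2, command PROXY
    fam_proto = 0x12 if udp else 0x11  # AF_INET + DGRAM (0x12) or STREAM (0x11)
    addresses = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
                 struct.pack("!HH", src_port, dst_port))
    # the 2-byte length field counts everything after the 16-byte fixed header
    return (PP2_SIGNATURE +
            struct.pack("!BBH", ver_cmd, fam_proto, len(addresses)) +
            addresses)

# Usage sketch: prepend the header to a DNS query before forwarding it to the
# backend (placeholder addresses from the documentation ranges).
dns_query = b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"  # placeholder DNS message
proxied = build_proxy_v2_header("198.51.100.7", 53124, "192.0.2.10", 53) + dns_query
```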

On the other end, what you also could do is implement a direct server return, especially for UDP queries, where the backend server would immediately send a response back to the client, although of course you do not get your nice statistics out of your proxy then.

What implementations exist of this protocol? A bunch of load balancers support it; HAProxy and F5 do, but only for TCP. Web servers support it, TCP only as well. dnsdist 1.5.0 supports this, both TCP and UDP, as do PowerDNS Authoritative Server 4.6.0 and Recursor 4.4.0, and there is a roadmap item for BIND 9 to also support the incoming proxy protocol to get the real client IP addresses.

If you want some further reading, the slides are up and these are the links. You can read the protocol specification; it is an RFC‑style document, relatively short and easy to read. There is dnsdist documentation on how to use it, and a link to the BIND 9 issue that you can look at and vote on to get it implemented.

If there are any questions here, I will be happy to take them.

JOAO DAMAS: So, Brett is the first one.

SPEAKER (Brett Carr): Hi Pieter. This is really cool stuff. We actually do some of this using EDNS Client Subnet currently, so the thing that you told us not to do, we do it currently, but it was out of necessity because this kind of stuff didn't exist. I suspect we may move to this at some point.

I had a couple of questions really. One you may not know the answer to, but I did notice you mentioning there that F5 is used for TCP only. Do you know why? Do they have plans to also do it in UDP?

PIETER LEXIS: That is ‑‑ I'll try to ignore the echo. Excellent, it's gone. So the thing is, the proxy protocol is actually specified for TCP, where you would send the proxy protocol header at the start of the TCP connection, and then the repeated requests, or repeated HTTPS requests, would use the same proxy information in the backend. For UDP, we just prefix every query we send out, but UDP is just not a use case for many big load balancers. You could always ask them to do it for UDP, but I think the biggest reason is that UDP is not really interesting right now. It might become very interesting, of course, when QUIC becomes a thing.

SPEAKER (BRETT CARR): Secondly, do you have any plans to standardise it in the IETF?

PIETER LEXIS: No, we have no plans to standardise this in the IETF, because the standard already exists. It is a document from the HAProxy project and it has been implemented in many different types of software as well. We did try to put some of these things before the IETF with XPF. The biggest problem was, well, first of all it was inside a DNS packet and you had some mangling, and the other thing was that the privacy advocates were not having it.

SPEAKER: Okay. Thank you.

JOAO DAMAS: Okay, Benno.

BENNO OVEREINDER: Hi. Well, thank you Pieter for the presentation. You had a list of software products that have the proxy protocol on their roadmap. You can add Unbound also, for next year.

PIETER LEXIS: Okay. Excellent. I'll update the slides.

BENNO OVEREINDER: Thank you. And thank you for the presentation. Excellent work; it's relevant, it's important. Industry users are asking for these features from, I think, the different software vendors, and also from us. Yeah, thanks.

PIETER LEXIS: Thank you.

JOAO DAMAS: There is a question in the Q&A panel from Dirk Myer:

"Do you see any impact by MDU?"

PIETER LEXIS: We have not thoroughly tested this, although if you look at the protocol description, the header is quite small. And of course the idea is that this is used between a proxy and a backend, which usually live within the same network or at least are relatively close. So MTU shouldn't be a big issue there.

JOAO DAMAS: Okay. Peter was trying to ask ‑‑

PIETER LEXIS: And of course DNS queries are tiny, that also helps.

JOAO DAMAS: Indeed. Okay, no more questions, so thank you very much.

PIETER LEXIS: All right. You're very welcome.

JOAO DAMAS: Next up is Geoff.

GEOFF HUSTON: Do you have my slides?

SHANE KERR: We can do that.

GEOFF HUSTON: This is a talk about measurements of DNSSEC validation using 4096 bit RSA keys. Thank you, off we go again.

RSA is a prime number algorithm. The problem with most of these prime number algorithms is that to break them you need enumeration, and enumeration basically means you need to make the puzzle bigger and bigger and bigger in order to make the enumeration harder, and so if you look at the 2 to the nth operations to solve an RSA problem, it takes a relatively large key length in order to get a decent amount of crypto density. So it's not a very dense algorithm, and the problem is, with quantum computing, assuming that someone instead of building a 3‑qubit quantum computer builds a 20‑million‑qubit quantum computer, you can crack this stuff relatively quickly. Of course, assuming massive quantum computing.

So, you might be concerned about this. 8 hours is actually a really tight time window, and if you believe that quantum computers are going to come quickly, and you believe that ECDSA elliptic curve might not be what you want, you might want to think about longer RSA keys. So we have done some thinking for you, and we actually had a look at some of the longer key lengths to see what it means.

RSA‑4096 is what we looked into. The private key is around 3,000 bytes; the public key, around 900 bytes. Fair enough. The keys are certainly larger than with ECDSA. Security level: certainly better than ECDSA P‑256 and certainly better than the other one. But they are big, and that could be a problem for your performance: it takes an age to sign with them. Validation time: we validated across 50,000 records and it seemed to be about the same time, admittedly it was a relatively coarse test. Signing time was measured across 500,000 entries. Yes, it takes longer to sign.
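As a rough illustration of the signing and validation cost gap being described, here is a small timing sketch (not the measurement setup used for these results) comparing RSA‑4096 and ECDSA P‑256 with the Python cryptography library; absolute numbers depend entirely on the machine, only the relative difference is the point.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, ec, padding

def bench(sign, verify, n=200):
    """Return average (sign, verify) time in milliseconds over n operations."""
    msg = b"example.net. 3600 IN A 192.0.2.1"
    t0 = time.perf_counter()
    sigs = [sign(msg) for _ in range(n)]
    t1 = time.perf_counter()
    for s in sigs:
        verify(s, msg)
    t2 = time.perf_counter()
    return (t1 - t0) / n * 1e3, (t2 - t1) / n * 1e3

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
ec_key = ec.generate_private_key(ec.SECP256R1())

rsa_sign, rsa_verify = bench(
    lambda m: rsa_key.sign(m, padding.PKCS1v15(), hashes.SHA256()),
    lambda s, m: rsa_key.public_key().verify(s, m, padding.PKCS1v15(), hashes.SHA256()))
ec_sign, ec_verify = bench(
    lambda m: ec_key.sign(m, ec.ECDSA(hashes.SHA256())),
    lambda s, m: ec_key.public_key().verify(s, m, ec.ECDSA(hashes.SHA256())))

print(f"RSA-4096:    sign {rsa_sign:.2f} ms, verify {rsa_verify:.2f} ms")
print(f"ECDSA P-256: sign {ec_sign:.2f} ms, verify {ec_verify:.2f} ms")
```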

The real issue, though, is in the DNSSEC resource records; the values are larger and that causes issues. Although 1,245 octets for a DNSKEY response is not a problem normally, the incredibly conservative setting for DNS Flag Day 2020, which advocated a DNS payload of 1,232, I don't know why, causes issues here. It causes issues because you are basically truncating this answer: another round‑trip time to get the truncated answer back, then you have got to go up to TCP, so it's slower and less reliable compared to ECDSA, which doesn't have this problem.

Can we see this? Yes, we can. If we look at all the folk we tested, 17% of them use a UDP buffer size of less than that magic number. So around 17% of users aren't going to actually get this over UDP. They are going to take longer to validate.
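One way to observe this for a given zone is to send a DNSKEY query with the 1,232‑octet EDNS buffer size and check the TC bit, as in this sketch using the dnspython library; the zone name and the resolver address below are placeholders, not part of the measurement described in the talk.

```python
import dns.flags
import dns.message
import dns.query

# Placeholder zone and resolver; substitute a zone signed with large RSA keys.
query = dns.message.make_query("example.net", "DNSKEY",
                               want_dnssec=True, use_edns=0, payload=1232)
response = dns.query.udp(query, "8.8.8.8", timeout=3)

if response.flags & dns.flags.TC:
    # The DNSKEY RRset did not fit in 1,232 octets: a resolver would now
    # retry over TCP, adding at least one extra round trip.
    print("truncated over UDP, retry over TCP needed")
else:
    print(f"fits in UDP ({len(response.to_wire())} octets)")
```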

So let's test this out. We use an ad campaign to test a large number of folk, around about 10 million a day, on various DNS exercises. This one uses a validly signed and an invalidly signed domain, with a unique domain name every time so that caching doesn't work, and we capture everything back at the server to see what happened. We're looking, for example, for who validates, and that is: they ask all the right questions, they get the validly signed object, they don't get the one that's invalidly signed.

There are a lot who are mixed as well, who basically ask the right questions, but when they get a SERVFAIL, they will go off and ask someone else who doesn't validate, because, you know, getting an answer is way more important than getting the right one.

So, if we look at the first experiment, what we did was, with a single 4096‑bit key, the answers are actually pretty much the same as with the 1024‑bit key. There is a very slight drop in the number of folk who are validating, and the mixed number goes up slightly more, indicating that there is a small amount of issues with moving to TCP. What we actually found is 74% of those experiments did it over UDP; 26% did not. 23.5% did the follow‑up using TCP, and 2.5% didn't follow up. Some of them went off to a different resolver and some of them failed completely.

More evidence I think if you need it that that flag day setting was too low.

But the issue is, that's not a real test. Most folk use separate zone signing keys and key signing keys, and we should really be looking at two RSA keys, and when you do a key roll you have got to do three. So let's go all the way and look at three.

The DNSKEY response size with three keys gets to be well over 1,500 octets, but even two is over 1,500 octets. So we're now talking about UDP fragmentation and/or truncation.

The results are a little bit weird. Both of them cause UDP fragmentation and/or truncation, and you would have thought that as long as you are over 1,500 you get the same answer no matter what. In actual fact, that's not the case, and what we actually see is a higher drop rate for the larger three‑key case. It's not clear what's going on in that respect.

Maybe there is an internal buffer issue inside the resolver that just doesn't like the large response. Unlikely. Path MTU mismatch? Possible; we used a 1,500‑octet path in this experiment. There might be some lost ICMP. Security policy, receive buffer limits; a whole bunch of possible explanations, I don't know what's going on.

What we do find, which I find a little bit odd as well, is that the TCP failure rate, the failure to complete that follow‑up TCP, is a lot, lot higher when the response is 2k. A lot, lot higher than when the response is only 1.7k. So that's the problem. Where do you see the problem? If you live in Portugal, bad news. If you live in Iceland, less than good news. Even the USA. No. Ireland, no. And everyone else listed there, and a few others. Those are the big ones.

What about which ISPs have got this all completely wrong? If you are there, you have got a problem. Look for yourself; if you are there, go fix something. It's not working properly with large answers.

Should you use it? No. Does it matter? Well, that's a really good question, because we're not trying to protect data for 25 years, which is the typical US NIST standard; we're trying to protect this data for the life of the key. Now, if you use DNSSEC keys and give them a lifetime of 12 months, then you need to protect them for 12 months. If you roll every month, you need to protect it for a month, basically. So the more frequently you roll, the less of an issue you have in trying to up the security level of the keys that you are using. And while RSA 1024 is pretty lousy if you want a ten‑year lifetime, it's actually quite okay to use in DNSSEC at the moment, if you understand DNSSEC, and if you do, that's not a problem. And so to all the folk ‑‑ and I have heard a lot of them say, oh, we can't use DNSSEC to secure the web because the keys are too short ‑‑ the answer is apples and oranges; you are applying the wrong test to this data. The key lifetime is the lifetime of the key itself, not the lifetime of the algorithm to protect it.

There is an issue, though, about quantum computing. And there are a lot of unknowns there. But one of them is this concept called quantum resilience, and while security level, the 2 to the nth operations to solve it, is a non‑quantum quantification of algorithms, quantum changes that a little bit. Quantum resilience is actually a subtly different concept. And RSA with longer keys is thought, is thought, we don't know yet, to have more quantum resilience than, say, the equivalent elliptic curve algorithms. But I'll go back to that other slide: how long do you need to protect your data? For the length of the key roll. The longer you keep your key static and the less frequently you roll them, the more you have to worry about this. The more frequently you regularly roll keys, the less of a problem you have with this kind of quantum consideration. And that is the end of my presentation.

Questions.

DAVE KNIGHT: Thanks for that Geoff. Do we have any questions?

GEOFF HUSTON: Can I go back to sleep now?

DAVE KNIGHT: Yes. I don't see anything in the queue ‑‑ yeah, a question from Jim Reid in the Q&A.

"Some data on key length and key rotation values would be good, i.e. where is the trade‑off between short keys that change frequently and longer length keys?"

GEOFF HUSTON: You know, Jim, that's an interesting question. When NIST evaluate these algorithms, they are not evaluating them in the context of DNSSEC. They are evaluating them with the idea that, can someone in the future bust open this document you have encrypted? And the standard they look at in general is around 25 years when they evaluate an algorithm. And so their advice about RSA 1024 was based around current techniques of cracking that inside of, if you will, 20‑odd years, with anticipated improvements in computing.

The shorter you make that, of course, the more the situation changes. Does NIST publish a continuous graph of time versus, if you will, weakness? No, they don't. So, at this point, the hands start waving, Jim, and there is no hard data that I'm aware of on that. But, you know, I am not the cryptographer in the room here, maybe others have a better answer. Thanks.

DAVE KNIGHT: A question from Andrew Campling.

SPEAKER (ANDREW CAMPLING): Hi. Good presentation as always, Geoff, thanks for that. I just wanted to say, I think the one uncertainty here for anything related to quantum is that no one really knows where the development is for quantum, because I don't imagine, for example, the Chinese government is likely to be that public about where its development is at, because obviously if they are hoping to use it as a sort of cyber weapon, it's unlikely they are going to say that they have got, you know, X‑qubit capability, because that would rather defeat the object of having a secret weapon. So that's the uncertainty here. None of us knows what the actual developments are or where they are at.

GEOFF HUSTON: Yes. Thank you, that's a good observation. But let me still point out, apples and oranges. If I encrypt a document, then that document needs to be encrypted for the lifetime of whatever secrets it described. When I use DNSSEC to actually protect a response, that protection is in the context of the parent key. It's in the context of validation. And once I roll the parent keys, it doesn't matter what happened to that old data; you cannot replay it. And so if I'm really looking at the DNS and DNSSEC as a replay protection mechanism, all these windows that we refer to in cryptography and the weaknesses under quantum do not directly apply to DNSSEC, because those windows have shut down. It's not as if it's a secret; we're just trying to protect it, not encrypt it, and that's why the considerations are different, and this is why, when I keep on hearing from various folk that RSA 1024 is too lousy to actually do things like DANE, the answer is, well, not quite. If we're talking about DNSSEC itself, it's a different issue, because it's all about the lifetime of the protection, not the lifetime of the encrypted information. So, a subtle distinction. Thank you.

DAVE KNIGHT: It looks like there are no more questions. Thanks again Geoff.

GEOFF HUSTON: Thank you.

DAVE KNIGHT: And we move on with Peter Thomassen.

PETER THOMASSEN: Right, I'll share my screen.

I hope you can see my slides too. So, I will talk to you today about the public suffix list and how to query it over DNS. I am Peter Thomassen; some of you may know me through deSEC, but today I am speaking in my employer's capacity.

So the public suffix list is a list of all the suffixes below which Internet users can register names, so like .com, but there are also longer ones like these. This list is managed by the community, and it informs about boundaries between organisations, or policy boundaries, in the domain space, so that you can know where the boundary of responsibility is between the parent and the child, so to speak.

It's not exactly the same as zone cuts, because it requires that you can publicly register stuff underneath, that essentially anyone could do that. It supports wildcards, so there could be a star dot something dot jp; I believe there is even one on the public suffix list. It also supports exceptions from wildcards, which makes the whole thing a bit more complicated, and it's a text file.

So before we go on, I'll quickly go over some use cases for the PSL for those people who haven't been dealing with it yet. Browsers use it for cookies, scripting, and to prevent phishing in the address bar. Also, certificate authorities need to be aware of public suffixes so that they don't issue wildcard certificates for public second‑level suffixes. Also, we run the deSEC DNS hosting platform, which is a multi‑tenant platform, and if you think of a customer creating co.uk, that would be blocking all other users from creating new zones underneath that, so you have to make sure that people don't register public suffixes on a public DNS hosting platform. And also for DMARC, the e‑mail authenticity protocol, it's used, for reporting and other things, to figure out the organisational domain so that you can apply the DMARC policy to it. So the organisational domain would be the one just before the public suffix, for example co.uk; in some cases it's called the registrable domain.

Why use a PSL query service? There are lots of implementations for different programming languages and all that, so why would you use a query service and not just have the PSL parsed and used in each place? For example, browsers usually bring a copy of it. But if you have software that is not very frequently updated, then you somehow have to have a way to keep the public suffix list up to date. You also have to parse the list, because of the wildcard exceptions. And there are libraries for that, but it's a complicated matter; you have to make decisions on what to use.

And also, if you want to do it yourself, you have to use a multi‑stage algorithm because of this wildcard stuff; it's complicated. The parsing algorithm has six or seven steps if you look it up.

If you have a DNS‑based query service, or any query service for that matter, you don't need applications to parse or refresh the public suffix list at all; you can retrieve results ad hoc, and in the DNS implementation we did, it's just one lookup and it's cacheable, so that's nice. You don't need any specialised tooling, because usually DNS tooling is around.

How does it work? You choose a zone under which you would like to run the lookup service, and each public suffix is stored as a PTR record which has, for example, co.uk as the owner name (under that zone) and also as the value. So there would be a record at that domain pointing to the co.uk public suffix. All other owner names below co.uk, for example, have a CNAME record pointing one level up, so that if you query for the PTR record, you will be directed to the value by the CNAME chain, so to speak. So you just have to query the PTR record.

Sometimes there are auxiliary rules that are also present next to the PTR record, as a TXT record. That happens when there are multiple rules that apply within the processing, for example when a wildcard exception overrules the wildcard, so for people who like to debug, we publish the overruled rules as the TXT record. But to figure out the result, you only need to look at the PTR outcome.

We implemented this under query.publicsuffix.zone.

So here are some examples of this. If you query, for example, indigo.dns.oarc.net.query.publicsuffix.zone, you see that there is a CNAME pointing at net.query.publicsuffix.zone, and if you look for the PTR, it's at net. So that's the public suffix for this domain. Another case, for a longer public suffix: if you query s3.dualstack.eus1.amazonaws.com under the lookup zone, you find that there is a PTR record directly at this name, which means this is in the public suffix list.

If you query s4, however, that is not a public suffix, and because of the labels in between s4 and .com, it turns out you have CNAMEs on all these names, so that's a long chain. It's not very elegant, but it's the only way of doing it in this data structure. So the s4 name directs you to the dualstack level, and then it goes one level up to the eus1 level and so on and so forth; at the end you are at the com owner name, and that one has a PTR record pointing at the com top‑level domain.

And also, if you have a wildcard with an exception ‑‑ I told you earlier there are such rules ‑‑ the example here is www.ck, and it turns out, in the last row here, that there is a TXT record at that name which has *.ck, so that means actually any second‑level domain of .ck would be a public suffix, like, I don't know if they have edu.ck. However, www.ck is not a public suffix, so there is an exception in the public suffix list, and so the wildcard doesn't apply and instead the public suffix is ck itself, because the parsing algorithm has a sort of generic overrule, like a high‑level rule that says there is a public suffix for any top‑level domain. So that's the star dot.

Those are the examples.
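For readers who want to try the service, here is a minimal client‑side sketch using the dnspython library against query.publicsuffix.zone; a stub resolver follows the CNAME chain automatically, so a single PTR query returns the public suffix. The expected outputs in the comments are taken from the examples above and assume the service is reachable.

```python
import dns.resolver

LOOKUP_ZONE = "query.publicsuffix.zone"

def public_suffix(name: str) -> str:
    """Return the public suffix for `name` via the DNS lookup service."""
    qname = f"{name.rstrip('.')}.{LOOKUP_ZONE}"
    answer = dns.resolver.resolve(qname, "PTR")  # CNAMEs are followed automatically
    return answer[0].target.to_text().rstrip(".")

# Examples from the talk
print(public_suffix("indigo.dns.oarc.net"))  # expected: net
print(public_suffix("www.ck"))               # expected: ck (wildcard exception)
```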

So there are implementations of that, and also a demo. I will not show the demo here, but it's easy to look at. We have it on query.publicsuffix.zone; you can query whatever you want. Also, if you go to publicsuffix.zone, without the query in front, there is a live demo; it's just a JavaScript form and it uses Google's DNS‑over‑HTTPS resolver to do the queries. There is a Python implementation that we use to update the zone's contents and extract suffixes from the public suffix list text file. So that is for reading in the actual list and producing the DNS output, but the library also has command‑line support, for example to look up the public suffix for any name ad hoc. It can also check freshness and all these things.

It's provider agnostic. So if anyone else wants to provide this, it's just a small class one has to add to do the format conversion for that API, and then the library will be compatible. I'm not aware of any other implementations in other languages for now.

It works perfectly well for the internal use case that we have as a DNS hosting provider, so, you know, making sure people don't register public suffixes. There may be other use cases beyond that, I don't know, maybe that could be discussed. Perhaps there are those who need different features: the public suffix list, for example, first has all the ICANN‑approved public suffixes and then it has private ones like the Amazon thing, so maybe that would be something some use cases need to be able to distinguish by, I don't know if that's necessary.

And when talking privately to some people from the community about this service, it was suggested to make this kind of a more permanent part of the community infrastructure or something. We're completely open to that, but I don't know if that makes any sense, if anyone is interested in that, and what kind of oversight would be needed, who would run that and all that stuff. So we wanted to put it out there: if that would be something that people would appreciate, we can certainly make it work somehow.

Thank you. That's the presentation. If there are any questions, let me know, and later, after you digest the slides, there are one or two slides in the backup on privacy if you are interested in those.

DAVE KNIGHT: Thank you Peter. We have one question in the Q&A, from Geoff Huston. And he asks: "Whatever happened to the IETF DBOUND effort that was supposed to do this in‑band in the DNS? Is that something that you are aware of?"

PETER THOMASSEN: What's it called?

DAVE KNIGHT: It's about domain boundaries.

PETER THOMASSEN: I haven't heard of that but it's good to hear of it. I will read up on it. Yeah. I don't know what happened to that.

DAVE KNIGHT: All right. Do we have any other questions? Okay. Well then we'll move on. Thank you again Peter.

Now, Petr Spacek is up next.

JOAO DAMAS: Let's put Anand first because we have an issue with a file from Peter.

DAVE KNIGHT: Okay, Anand, are you ready, willing and able to go ahead now?

ANAND BUDDHDEV: Hi. Yes, I am ready. Good day everyone. This is Anand from the RIPE NCC and I'll be doing a short presentation about the DNS activities at RIPE NCC over the past few months.

The first big announcement we have is that we have revamped our application for hosted DNS. Previously this used to be known as hosted k‑root, and we're now calling it hosted DNS, because our new app can take applications for both k‑root and the RIPE NCC's other DNS service, which we call AuthDNS, and you can view this app at the URL hosted-dns.ripe.net. If you are able to, we would ask you to host an instance of AuthDNS, especially outside of Europe. The AuthDNS service of the RIPE NCC carries zones such as ripe.net and e164.arpa, all of the RIPE NCC's reverse DNS zones, as well as the reverse DNS zones of all the other RIRs, and also some ccTLDs. So it's a very important service and increasing coverage for this would be great.

One of the changes we have made in our requirements is that we are now accepting virtual machines. These of course need to be properly specced; they need to have enough RAM and disk space, but as long as a host is able to provide this, we will accept virtual machines for hosting either k‑root or AuthDNS.

We launched some weeks ago, and since then we already have some active sites. We have an AuthDNS site in Ponta Grossa in Brazil, and we also have a k‑root instance in the same place in Ponta Grossa, and we have also just activated a k‑root service in Yogyakarta in Indonesia.

This is a map showing the hosted DNS instances. The blue dots represent k‑root instances, and of course there are many of them because we have been expanding the k‑root service for quite some time already. But you'll also see two orange dots, and these represent the AuthDNS instances. We have one in Oslo in Norway, we have one in Rome, and there are two more that are not quite visible because they are aggregated, and so there is one in Vienna and one in Ponta Grossa in Brazil.

Moving on. Our AuthDNS and k‑root core sites have also received upgrades. We have replaced several ageing servers with new Dell R440 servers; these have more CPUs and they are faster. We have also been replacing the routers: we now have Juniper routers and we have also started deploying Arista routers, and at some of these sites we are now connected at 10G, so that gives us a lot more capacity.

Moving on to something different: CDS for reverse DNS. This is something that had been requested of us, and earlier this year, in March, we activated it. I also reported about this at the previous RIPE meeting, and I would just like to say that this is just working, and we do between 10 and 20 domain object updates per month. There isn't much DNSSEC in the reverse DNS space, and even fewer people are using CDS or CDNSKEY, so we are not seeing much activity here, but it is there and it is available and working.

So, earlier this year, the RIPE NCC switched to algorithm 13 for all our zones, so this is ECDSA cryptography, and our KSKs and ZSKs are stored together, in that they are both on the signer. We do not keep our KSKs separately; they are not offline. Algorithm 13 keys are also all the same size and they do offer stronger security, so there is no frequent need for ZSK rollovers, and because of this we also see no strong reason for separating the KSK and the ZSK. So when we approach the next rollover, in the first quarter of 2022, we are going to switch to a combined signing key, a CSK. This will be a single key that signs all the records in a zone.

And now I would like to talk about something else again. In the reverse DNS zones that we manage, the default TTL for all the NS and DS records is two days, and we have noticed that this can cause delays for users when they want to change their delegation, when they want to change their NS records, or when they want to perform DNSSEC key rollovers. With a very long TTL on the DS records, their signer has to wait a long time before it can complete the rollover, and if there is an emergency of any kind, if there is a problem with DNSSEC, then the issue can linger because of this long TTL on the DS records. And many of you might already be aware that some of the large operators, such as Google and Quad9, limit the TTL of various records anyway, because they are aware of the pain caused by lingering DS records in particular.

So, we want to propose lowering the TTLs for NS and DS records. We would like to lower the TTL of DS records to one hour, and we want to lower the TTL for NS records to one day. I think this will be more in line with what operators are doing these days.

Now this is something that affects our community of course, and if any of you have feedback about this, I'm happy to chat with you about this. I will also be sending an e‑mail out to the DNS Working Group, particularly with these proposals, and I hope that we get feedback that way as well.

And with that, I come to the end of my presentation. If there are any quick questions now, I'll be happy to take them. Otherwise you can also approach me directly and I might come over to SpatialChat later for some chat as well. Thank you very much.

DAVE KNIGHT: Thanks for that, Anand. We have one question in the Q&A, from Kurt Kayser:

"What helped the decision to place a node in Ponta Grossa Brazil, power stability, network bandwidth, co‑location costs, etc.?"

ANAND BUDDHDEV: That's a very good question, Kurt. So, we generally accept applications from most places, and we are particularly keen on placing instances in areas that are not so well connected to other DNS infrastructure. So, when we received an application from the university in Ponta Grossa to host two instances, we talked to them a little bit, reviewed what kind of infrastructure was there, and noticed that the area, the geographic and network area, would benefit from instances. So, applicants come to us with applications, we review them and then we decide whether to accept them or not.

DAVE KNIGHT: Okay. I have three more questions and then I am going to close the queue, because we are already going to run over. The next question in the Q&A is from Peter Koch, and he asks:

"Is the preference for non‑EU hosted DNS to be proposed in the NIS 2 directive?"

ANAND BUDDHDEV: No, thank you, Peter, that is not it at all. We would just like to increase the footprint of our AuthDNS service outside of the EU, because we don't have much coverage outside of our service region. That's all.

DAVE KNIGHT: Okay. And another question in the Q&A from Giovane Mueller. "It would be nice to measure the impact of the TTL changes in query volumes."

ANAND BUDDHDEV: Thank you for that Giovane. That feedback we will consider and we will try and measure the effect and report back if there is anything significant to report about that.

DAVE KNIGHT: Okay. Now I thought I had someone else requesting audio, but I don't see that any more. All right. Okay. Thanks again, Anand, and we will move on.

Our final talk today is from Petr Spacek on BIND, and it is coming as a pre‑recorded video, which I think Menno should be set up to play for us.

PETR SPACEK: Hello everyone. My name is Petr Spacek and I work, among other things, on benchmarking BIND. In this presentation, I will try to convince you to upgrade from the ancient version of BIND, 9.11, which is almost at the end of its life, to the current stable version, 9.16, which is going to be supported for two more years and also provides significant performance improvements.

Before we dive in, let me remind you that benchmarking DNS resolvers and authoritative servers are two completely different things.

Most importantly, a DNS resolver can have orders of magnitude more or less work for a single query, depending on the state of the resolver's cache. For that reason, the classical queries‑per‑second metric almost doesn't make sense for a DNS resolver.

It's much better to use latency as the metric for DNS resolvers, because that reflects what the clients actually experience. Unfortunately, in this very short presentation we don't have enough time to cover too much in detail, so I will refer you to the earlier RIPE presentations to see and hear how we benchmark and why.

Most importantly, to get meaningful test results we have to start with a realistic dataset. For the measurements I am going to present here, we were using a dataset from a telco operator which had 20 resolvers handling a mix of landline and mobile clients. For the purposes of this talk, we defined a so‑called load factor, which practically means that we concentrate traffic from originally distinct resolvers onto a single machine. So if the load factor is, for example, 7, we concentrate the traffic originally going to 7 independent resolvers onto a single machine which runs BIND.

So, let's have a look at the results. For latency, the result is more complex than a single number. It's actually a histogram, which can be visualised using a percentile diagram which was invented by the PowerDNS folks. It's quite complicated if you see it for the first time, so let me walk you through the details.

In the middle of this chart, we can see a line, this is a visualisation of the latency for different percentiles of answers. Most importantly, the bottom left corner is where the smallest latency is. So we basically want to have this line to go as close as possible to the left bottom corner. If you go to the other side, to the top right corner, we are approaching the highest latency. So we don't want to go there.

Also, you can see that on the chart there is some background around the line, and that's a visualisation of the differences between different test runs. Obviously, when we are measuring with real data on the real Internet, the result is not perfectly reproducible, so we repeated every test nine times, and this wide background shows you the differences between the minimum and maximum latency across the nine test rounds. You can see that, despite this, the results are nice and pretty stable.

The line in the middle is the average latency measured or computed from the nine test runs.

Okay, so now we have covered the basics; what does it mean in practice? On the Y axis we have the response latency in milliseconds, and on the X axis we have the slowest percentiles. So if you look at the point on the X axis at number 10, the corresponding value on the Y axis is 1 millisecond, and it means that for 10% of queries the answer arrived in 1 millisecond or later, and for the remaining 90% of queries the answer arrived in 1 millisecond or faster.
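As an illustration of how such a "slowest percentile" curve is derived from raw per‑query latencies, here is a small sketch (not ISC's benchmarking tooling); the sample data is synthetic, just to show the computation.

```python
import random

def slowest_percentiles(latencies_ms, points=(50, 20, 10, 5, 2, 1)):
    """For each X-axis value p, return the latency such that the slowest p%
    of queries were answered in that time or later."""
    ordered = sorted(latencies_ms)
    n = len(ordered)
    return {p: ordered[int(n * (1 - p / 100))] for p in points}

# Synthetic example data: mostly fast cache hits plus a slow cache-miss tail.
random.seed(1)
sample = ([random.expovariate(1 / 0.4) for _ in range(9000)] +
          [random.expovariate(1 / 40) for _ in range(1000)])

for p, latency in slowest_percentiles(sample).items():
    print(f"slowest {p:>2}%: >= {latency:.1f} ms")
```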

So okay, that's it for the baseline measurement with the ancient version of BIND 9.11 and the load factor 1. So what happens if you push more traffic through the same version of BIND?

Here, we have a blue line which represents the measurement with load factor 7. So the traffic originally going to 7 independent resolvers is now going to one instance. And we can see on the blue line that it's closer to the bottom left corner, so it actually improves the latency.

The reason is that more traffic on a single machine typically means a higher cache hit rate, which in the end means that the resolver has, on average, less work per query, and thus the resolver is able to handle more queries. Of course, this cannot go on to infinity, so what happens if you increase the traffic even more? With load factor 8, we can see on the yellow line here that the latency is now worse than it used to be; for 1% of queries the latency has now increased, so we can conclude that increasing the load further is not helping any more, because the resolver is already overloaded. So that's not good. And we have established the baseline: the ancient version of BIND, 9.11, is, on our dataset, able to handle 7 times the basic volume of traffic going to a single resolver on average.

Okay, so now that we know the baseline, we can compare the behaviour of the ancient version, BIND 9.11, and the current stable, 9.16.

On the green line here, we can see that the new version is actually much better at handling the cache‑miss queries, because it has significantly improved latency for cache‑miss queries while handling the cache‑hit queries just the same. That's very good news, because it means that if you just drop the new version 9.16 in place of the ancient 9.11, your resolver with typical telco traffic will gain performance, or in other words, better latency for users, and that's it.

Of course, it invites another question: how far can we push 9.16? So if we increase the load even more, with load factor 14, we can see that the new version is still better than the old one, except for 4% of queries in the middle, where the difference between these lines is approximately 1 millisecond. And I think that's pretty good, because it allows you to concentrate more traffic on a single machine if you want.

We are short on time, so let me conclude.

Please upgrade and don't hesitate to upgrade to the new version because it will actually improve latency of your resolvers.

Thank you for your attention.

DAVE KNIGHT: Thank you for that pre‑recorded talk, Petr. Petr is here and able to answer questions. We don't have much time; we have already run over. We don't have any in the Q&A. I have a question. You speak in terms of load factors compared to a baseline. Do you have numbers? It's hard for me to understand without actual query rates and response rates in terms of numbers of queries per second. Can you give us an idea of those?

PETR SPACEK: I am intentionally not showing any numbers, because with a different dataset it would be different. I mean, if it was, like, an IoT thing which asks a cache‑miss query every time, the QPS would mean something else than for telco traffic. For resolvers, QPS just doesn't make sense. That's the wrong question, sorry.

DAVE KNIGHT: Fair enough.

PETR SPACEK: I would love to do a talk just about that, but it's much longer than we can cover here.

DAVE KNIGHT: We now have a question in the Q&A: "Hi Petr, what causes some fraction of the queries to be answered in exactly 1 millisecond?"

PETR SPACEK: Exactly 1 millisecond. Well, it's mostly cache hits, plus the measurement ‑‑ for sub‑millisecond measurements it's not that precise, so basically whatever is below 1 millisecond, we don't need to care about that much.

DAVE KNIGHT: Okay. And we have a question from Gert Döring.

"Can we have a comparison across different resolvers for the next DNS Working Group meeting? Much has happened in that field, not P DNS recursive, not bound Unbound since I saw the last comparisons."

I guess that's a question for the whole group.

PETR SPACEK: Actually, if you look at the slides I linked from the presentation, there is a link to RIPE 79, and you can find some comparisons there, with a huge caveat that it doesn't compare exactly, because it very much depends on all the configuration knobs. My takeaway is that you have to compare exactly the configuration you want to use in your particular deployment, because if you enable, let's say, some filtering, RPZ or something, the results might be completely different than with a different configuration. So I mean, of course I can produce charts, but it will not be very fair because, you know, there are a million knobs in every single DNS implementation.

DAVE KNIGHT: All right. Thank you.

If there are no more questions, then thank you again, Petr. And that concludes the RIPE 83 session of the DNS Working Group. Apologies, we're about nine minutes over time. Shane, Joao, do you have anything you want to add?

SHANE KERR: I just want to say thank you Dave for all your hard work in chairing the Working Group with us for the past while here. And we're going to miss you and hopefully we'll see you around as a non‑Chair.

DAVE KNIGHT: Thank you it's been my pleasure. Yeah I hope to see you guys and everyone else in Berlin in May.

SHANE KERR: All right. Bye everyone.