- 26 Jan, 2018 2 commits
-
-
Simon Kelley authored
-
Leon M. George authored
-
- 21 Jan, 2018 2 commits
-
-
Simon Kelley authored
-
Simon Kelley authored
-
- 20 Jan, 2018 2 commits
-
-
Simon Kelley authored
-
Simon Kelley authored
-
- 19 Jan, 2018 3 commits
-
-
Simon Kelley authored
It's OK for NSEC records to be expanded from wildcards, but in that case the proof of non-existence is only valid starting at the wildcard name, *.<domain>, NOT at the name expanded from the wildcard. Without this check it's possible for an attacker to craft an NSEC record which wrongly proves non-existence in a domain which includes a wildcard.
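For context, a minimal sketch of the idea (this is not the dnsmasq code; the helper names and dotted-name handling are invented for illustration): when the RRSIG labels field is smaller than the label count of the NSEC owner name, the record was synthesized from a wildcard, and the proof must be taken to start at *.<domain>.

```c
/* Hedged sketch, not the dnsmasq implementation: derive the effective
 * owner name of a wildcard-expanded NSEC from the RRSIG labels field. */
#include <stdio.h>
#include <string.h>

/* Count labels in a dotted name, ignoring the root. */
static int count_labels(const char *name)
{
  if (!*name || strcmp(name, ".") == 0)
    return 0;

  int count = 1;
  for (const char *p = name; *p; p++)
    if (*p == '.' && p[1] != '\0')
      count++;
  return count;
}

/* Write the owner name at which the non-existence proof starts into out.
 * Returns 0 on success, -1 if the RRSIG labels count is impossible. */
static int nsec_effective_owner(const char *owner, int rrsig_labels,
                                char *out, size_t outlen)
{
  int owner_labels = count_labels(owner);

  if (rrsig_labels > owner_labels)
    return -1;                            /* bogus record */

  if (rrsig_labels == owner_labels)
    {
      snprintf(out, outlen, "%s", owner); /* not wildcard-expanded */
      return 0;
    }

  /* Skip the leading (owner_labels - rrsig_labels) labels and prepend
     "*." so the proof starts at the wildcard name, not the expansion. */
  const char *p = owner;
  for (int skip = owner_labels - rrsig_labels; skip > 0; skip--)
    {
      p = strchr(p, '.');
      if (!p)
        return -1;
      p++;
    }

  snprintf(out, outlen, "*.%s", p);
  return 0;
}

int main(void)
{
  char buf[256];
  /* NSEC owner synthesized from *.example.com (2 labels in the RRSIG). */
  nsec_effective_owner("host.sub.example.com", 2, buf, sizeof(buf));
  printf("%s\n", buf);                    /* prints "*.example.com" */
  return 0;
}
```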
-
Neil Jerram authored
-
Artem Poloznikov authored
-
- 15 Jan, 2018 3 commits
-
-
Simon Kelley authored
-
Simon Kelley authored
-
Ville Skyttä authored
-
- 14 Jan, 2018 1 commit
-
-
Geert Stappers authored
Development of EtherBoot gPXE was always the work of iPXE core developer Michael Brown. http://git.etherboot.org/?p=gpxe.git was last updated in 2011, while https://git.ipxe.org/ipxe.git is alive and well. This s/gPXE/iPXE/ reflects that.

Signed-off-by: Geert Stappers <stappers@stappers.nl>
-
- 07 Jan, 2018 1 commit
-
-
Simon Kelley authored
RFC 4034 says:

  [RFC2181] specifies that an RRset is not allowed to contain duplicate
  records (multiple RRs with the same owner name, class, type, and RDATA).
  Therefore, if an implementation detects duplicate RRs when putting the
  RRset in canonical form, it MUST treat this as a protocol error.  If the
  implementation chooses to handle this protocol error in the spirit of the
  robustness principle (being liberal in what it accepts), it MUST remove
  all but one of the duplicate RR(s) for the purposes of calculating the
  canonical form of the RRset.

We chose to handle this robustly, having found at least one recursive server in the wild which returns duplicate NSEC records in the AUTHORITY section of an answer generated from a wildcard record. sort_rrset() is therefore modified to delete duplicate RRs, which are detected almost for free during the bubble-sort process.

Thanks to Toralf Förster for helping to diagnose this problem.
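As an illustration of the idea only (strings stand in for wire-format RRs here; sort_rrset() itself works on the DNS packet in place), a hedged sketch of removing duplicates as a side effect of the comparisons a bubble sort already does:

```c
/* Hedged sketch: bubble sort an array of "records" and drop duplicates
 * the moment a comparison reports equality. */
#include <stdio.h>
#include <string.h>

static void sort_dedup(const char **rrs, int *count)
{
  int swapped = 1;

  while (swapped)
    {
      swapped = 0;
      for (int i = 0; i + 1 < *count; i++)
        {
          int cmp = strcmp(rrs[i], rrs[i + 1]);

          if (cmp == 0)
            {
              /* Duplicate found during the sort: shift the tail down
                 and shrink the set, essentially for free. */
              memmove(&rrs[i + 1], &rrs[i + 2],
                      (*count - i - 2) * sizeof(rrs[0]));
              (*count)--;
              swapped = 1;            /* indices shifted; re-scan */
            }
          else if (cmp > 0)
            {
              const char *tmp = rrs[i];
              rrs[i] = rrs[i + 1];
              rrs[i + 1] = tmp;
              swapped = 1;
            }
        }
    }
}

int main(void)
{
  const char *rrs[] = { "nsec b", "nsec a", "nsec b", "nsec a" };
  int count = 4;

  sort_dedup(rrs, &count);
  for (int i = 0; i < count; i++)
    printf("%s\n", rrs[i]);           /* "nsec a", "nsec b" */
  return 0;
}
```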
-
- 03 Jan, 2018 1 commit
-
-
Simon Kelley authored
-
- 02 Jan, 2018 1 commit
-
-
Simon Kelley authored
-
- 15 Dec, 2017 3 commits
-
-
Simon Kelley authored
-
Simon Kelley authored
-
Simon Kelley authored
-
- 06 Dec, 2017 2 commits
-
-
Simon Kelley authored
If all configured DNS servers return REFUSED in response to a query, dnsmasq ends up in an infinite loop retransmitting the query, resulting in high CPU load. The problem is caused by the REFUSED retransmission logic, which does not check for the end of a server-list iteration in strict-order mode. A single configured DNS server returning a REFUSED reply is enough to trigger this problem in strict-order mode. This was introduced in 9396752c. Thanks to Hans Dedecker <dedeckeh@gmail.com> for spotting this and the initial patch.
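A hedged sketch of the missing termination condition, using invented names rather than dnsmasq's actual server-list structures: stop retrying once the iteration wraps back to the server the query started from, instead of cycling forever.

```c
/* Hedged illustration: pick the next server after a REFUSED reply,
 * or give up once the whole list has been tried in strict-order mode. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct server {
  const char *name;
  struct server *next;         /* singly-linked, NULL-terminated list */
};

/* Return the next server to try, or NULL when every server in
   strict-order mode has already refused the query. */
static struct server *next_server_after_refused(struct server *head,
                                                struct server *current,
                                                struct server *first_tried,
                                                bool strict_order)
{
  struct server *next = current->next ? current->next : head;

  /* Wrapping back to the first server tried means the list is
     exhausted: return REFUSED to the client instead of looping. */
  if (strict_order && next == first_tried)
    return NULL;

  return next;
}

int main(void)
{
  struct server c = { "10.0.0.3", NULL };
  struct server b = { "10.0.0.2", &c };
  struct server a = { "10.0.0.1", &b };

  /* All three refused; after c the iteration wraps to a, where it began. */
  struct server *next = next_server_after_refused(&a, &c, &a, true);
  printf("%s\n", next ? next->name : "give up, answer REFUSED");
  return 0;
}
```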
-
Simon Kelley authored
-
- 02 Dec, 2017 2 commits
-
-
Simon Kelley authored
-
Josh Soref authored
-
- 01 Dec, 2017 1 commit
-
-
李三0159 authored
-
- 16 Nov, 2017 1 commit
-
-
Simon Kelley authored
-
- 08 Nov, 2017 2 commits
-
-
Dr. Markus Waldeck authored
-
Simon Kelley authored
-
- 06 Nov, 2017 1 commit
-
-
Petr Menšík authored
Some of our OpenStack users run quite a large number of dnsmasq instances on a single host, one dnsmasq per virtual network. After upgrading to a more recent version they started hitting the default system-wide limit on the number of inotify instances: the sysctl fs.inotify.max_user_instances defaults to 128, and once they reached 116 dnsmasq instances, further instances failed to start. I was surprised they have any use case for such a high number of instances. There is a simple way to avoid hitting the low system limit: they do not use resolv.conf for name-server configuration, nor any dhcp hosts or options directory, so the inotify socket that is created is never used in that case. Simple patch attached. I know the inotify system limit can be raised, but I think it is better not to waste resources that are left unused.
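A hedged, Linux-only sketch of the idea (the config fields below are stand-ins, not dnsmasq's real option names): only create the inotify instance when there is something that could actually be watched, so the per-user instance limit is not consumed for nothing.

```c
/* Hedged sketch: skip inotify_init1() entirely when neither resolv.conf
 * nor any dhcp hosts/options directory would ever be watched. */
#include <stdio.h>
#include <sys/inotify.h>

struct config {
  int no_resolv;           /* resolv.conf is never read                 */
  const char *hostsdir;    /* dhcp hosts directory, or NULL             */
  const char *optsdir;     /* dhcp options directory, or NULL           */
};

/* Return an inotify fd, or -1 when nothing would ever be watched, so
   fs.inotify.max_user_instances is not used up for no benefit. */
static int maybe_create_inotify(const struct config *conf)
{
  if (conf->no_resolv && !conf->hostsdir && !conf->optsdir)
    return -1;                       /* nothing to watch: skip it */

  return inotify_init1(IN_NONBLOCK | IN_CLOEXEC);
}

int main(void)
{
  struct config conf = { 1, NULL, NULL };   /* OpenStack-style instance */
  int fd = maybe_create_inotify(&conf);

  if (fd < 0)
    printf("inotify not needed\n");
  else
    printf("watching, fd=%d\n", fd);
  return 0;
}
```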
-
- 31 Oct, 2017 2 commits
-
-
Simon Kelley authored
-
Simon Kelley authored
Unless we are acting in authoritative mode, obviously. To do otherwise may allow cache snooping, see http://cs.unc.edu/~fabian/course_papers/cache_snooping.pdf
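A hedged sketch of the policy, not dnsmasq's actual code paths: a query with the RD (recursion desired) bit clear is only answered from local data when we are authoritative for it; otherwise it is refused, so the cache contents cannot be probed with non-recursive queries.

```c
/* Hedged illustration keyed on the RD bit in the 16-bit DNS flags word. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DNS_FLAG_RD 0x0100           /* recursion desired */

enum answer_source { ANSWER_REFUSE, ANSWER_LOCAL, ANSWER_CACHE_OR_FORWARD };

static enum answer_source classify_query(uint16_t flags, bool authoritative)
{
  bool rd = (flags & DNS_FLAG_RD) != 0;

  if (!rd)
    /* Non-recursive query: only answer if we hold the data
       authoritatively; never reveal what happens to be cached. */
    return authoritative ? ANSWER_LOCAL : ANSWER_REFUSE;

  return ANSWER_CACHE_OR_FORWARD;
}

int main(void)
{
  printf("%d\n", classify_query(0x0000, false)); /* 0: refuse (snooping) */
  printf("%d\n", classify_query(0x0100, false)); /* 2: normal recursive  */
  return 0;
}
```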
-
- 30 Oct, 2017 1 commit
-
-
Simon Kelley authored
-
- 28 Oct, 2017 5 commits
-
-
Simon Kelley authored
This is set to status DoNotImplement in RFC 6944.
-
Simon Kelley authored
-
Simon Kelley authored
-
Simon Kelley authored
-
Simon Kelley authored
-
- 26 Oct, 2017 1 commit
-
-
Simon Kelley authored
The current logic is naive in the case that there is more than one RRset in an answer (typically, when a non-CNAME query is answered by one or more CNAME RRs and then an answer RRset). If all the RRsets validate, then they are cached and marked as validated, but if any RRset doesn't validate, then the AD flag is not set (good) and ALL the RRsets are cached marked as not validated.

This breaks when, eg, the answer contains a validated CNAME pointing to a non-validated answer. A subsequent query for the CNAME without the DO bit will get an answer with the AD flag wrongly reset, and worse, the same query with the DO bit set will get a cached answer without RRSIGs, rather than being forwarded.

The code now records the validation status of individual RRsets, and that is used to correctly set the "validated" bits in the cache entries.
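A hedged sketch with invented types, showing the shape of the fix: a verdict per RRset rather than a single verdict for the whole answer, so the AD flag and the cached "validated" bits are set independently.

```c
/* Hedged illustration: per-RRset DNSSEC verdicts. */
#include <stdbool.h>
#include <stdio.h>

enum dnssec_status { SEC_SECURE, SEC_INSECURE, SEC_BOGUS };

struct rrset {
  const char *owner;
  enum dnssec_status status;   /* verdict for this RRset only */
};

/* AD is only set when every RRset in the answer validated, but each
   RRset remembers its own result for later cache hits. */
static bool answer_gets_ad_flag(const struct rrset *sets, int n)
{
  for (int i = 0; i < n; i++)
    if (sets[i].status != SEC_SECURE)
      return false;
  return true;
}

int main(void)
{
  struct rrset answer[] = {
    { "alias.example.com",  SEC_SECURE   },   /* validated CNAME      */
    { "target.example.net", SEC_INSECURE },   /* unsigned target zone */
  };

  /* No AD flag on this reply, yet the CNAME RRset is still cached as
     validated, so a later query for it alone is answered correctly. */
  printf("AD: %d, CNAME cached as validated: %d\n",
         answer_gets_ad_flag(answer, 2),
         answer[0].status == SEC_SECURE);
  return 0;
}
```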
-
- 14 Oct, 2017 3 commits
-
-
Simon Kelley authored
Mainly code-size and readability fixes. Also return NULL from do_rfc1035_name() when the limit is exceeded, so that the truncated bit gets set in the answer.
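A hedged sketch, loosely modelled on what an encoder like do_rfc1035_name() has to do (this is not the actual dnsmasq function): copy a dotted name into wire format as length-prefixed labels and return NULL instead of writing past the limit, so the caller can set the TC bit.

```c
/* Hedged illustration of a bounds-checked RFC 1035 name encoder. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static unsigned char *encode_name(unsigned char *p, const char *name,
                                  const unsigned char *limit)
{
  while (*name)
    {
      const char *dot = strchr(name, '.');
      size_t len = dot ? (size_t)(dot - name) : strlen(name);

      /* Need the length byte, the label, and room for the final zero. */
      if (len > 63 || (size_t)(limit - p) < len + 2)
        return NULL;                 /* caller sets the truncated bit */

      *p++ = (unsigned char)len;
      memcpy(p, name, len);
      p += len;
      name += len;
      if (*name == '.')
        name++;
    }

  if (p >= limit)
    return NULL;
  *p++ = 0;                          /* root label terminates the name */
  return p;
}

int main(void)
{
  unsigned char buf[12];             /* deliberately too small */
  unsigned char *end = encode_name(buf, "www.example.com", buf + sizeof(buf));

  if (end)
    printf("encoded %d bytes\n", (int)(end - buf));
  else
    printf("would not fit: set the TC bit\n");
  return 0;
}
```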
-
Simon Kelley authored
-
Simon Kelley authored
The logic to determine if an EDNS0 header was added was wrong. It compared the packet length before and after the operations on the EDNS0 header, but these can include adding options to an existing EDNS0 header, so a query may have an existing EDNS0 header which is extended, and the logic thinks that a header was added de novo. Replace this with a simpler system: check if the packet has an EDNS0 header, do the updates/additions, and then check again. If it didn't have one initially but has one afterwards, that's the correct condition to strip the header from a reply, and to assume that the client cannot handle packets larger than 512 bytes.
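A hedged sketch of the simpler check (the packet type and helpers below are mocked for illustration; the real code inspects the additional section of the packet in place): decide whether WE added the EDNS0 pseudo-header by checking for its presence before and after the updates, instead of comparing packet lengths.

```c
/* Hedged illustration of a presence check before and after the update. */
#include <stdbool.h>
#include <stdio.h>

/* Very simplified stand-in for a DNS packet; real code looks for an
   OPT record in the additional section. */
struct dns_packet {
  bool has_edns0;
};

static bool packet_has_edns0(const struct dns_packet *pkt)
{
  return pkt->has_edns0;
}

static void add_or_update_edns0(struct dns_packet *pkt)
{
  /* May extend an existing header or add a new one; either way the
     packet grows, which is why a length comparison misleads. */
  pkt->has_edns0 = true;
}

/* True when WE added the header, so it must be stripped from the reply
   and the client assumed limited to 512-byte packets. */
static bool edns0_added_by_us(struct dns_packet *pkt)
{
  bool had_edns0 = packet_has_edns0(pkt);

  add_or_update_edns0(pkt);

  return !had_edns0 && packet_has_edns0(pkt);
}

int main(void)
{
  struct dns_packet plain = { false }, with_opt = { true };
  printf("%d %d\n", edns0_added_by_us(&plain),      /* 1: strip later   */
                    edns0_added_by_us(&with_opt));  /* 0: client's own  */
  return 0;
}
```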
-