{"id":1888,"date":"2026-01-26T09:42:43","date_gmt":"2026-01-26T08:42:43","guid":{"rendered":"https:\/\/virtualguru.cz\/?p=1888"},"modified":"2026-01-26T09:42:43","modified_gmt":"2026-01-26T08:42:43","slug":"proc-na-dns-a-mtu-skutecne-zalezi-supervisor-story","status":"publish","type":"post","link":"https:\/\/virtualguru.cz\/en\/2026\/01\/26\/proc-na-dns-a-mtu-skutecne-zalezi-supervisor-story\/","title":{"rendered":"Pro\u010d na DNS a MTU skute\u010dn\u011b z\u00e1le\u017e\u00ed &#8211; Supervisor Story"},"content":{"rendered":"<p>Ned\u00e1vno jsem u jednoho z\u00e1kazn\u00edka \u0159e\u0161il nasazen\u00ed Supervisor clusteru. Celkem trivi\u00e1ln\u00ed v\u011bc, \u0159eknete se si, ale narazili jsme na n\u011bkolik v\u011bc\u00ed, kter\u00e9 zat\u00edm nebyly pln\u011b zdokumentovan\u00e9 v \u017e\u00e1dn\u00e9m KB.<\/p>\n<p>Nasazen\u00ed v prvn\u00ed f\u00e1zi prob\u00edhalo celkem norm\u00e1ln\u011b.<\/p>\n<ol>\n<li>Cluster s NSX\n<ul>\n<li>Z\u00e1kazn\u00edk pou\u017e\u00edv\u00e1 NSX, resp licen\u010dn\u011b m\u00e1 VCF, ale instalovan\u00e9 po \u010d\u00e1stech na verzi vSphere 8.<\/li>\n<\/ul>\n<\/li>\n<li>Storage policies\n<ul>\n<li>I kdy\u017e z\u00e1kazn\u00edk nepou\u017e\u00edv\u00e1 vSAN, pro nasazen\u00ed Tanzu (vSphere Kubernetes Services) je pot\u0159eba m\u00edt vytvo\u0159en\u00e9 Storage policy, aby K8S ved\u011bl kam um\u00eds\u0165ovat persistentn\u00ed objekty.\u00a0<\/li>\n<li>Mus\u00edte vytvo\u0159it Tagy pro Storage policies<\/li>\n<li>P\u0159i\u0159adit tyto Tagy odpov\u00eddaj\u00edc\u00edm Datastores, aby p\u0159i v\u00fdb\u011bru Storage policy byl alespo\u0148 jeden Datastore &#8222;Compatible&#8220;<\/li>\n<li>Vytvo\u0159it Storage Policies, pokud nem\u00e1te vSAN, budou zalo\u017eeny jen na Tag based placement.<\/li>\n<\/ul>\n<\/li>\n<li>Load Balancer\n<ul>\n<li>Mus\u00edte si p\u0159i vytv\u00e1\u0159en\u00ed Supervisor vybrat, zda budete pou\u017e\u00edvat NSX load balancer, nebo m\u00e1te AVI. 
AVI is licensed separately but offers more functionality. You always have to weigh up whether it is worth it for you or not.<\/li>\n<li>The customer chose &#8220;just&#8221; NSX.<\/li>\n<\/ul>\n<\/li>\n<li>3 IP subnets that are routable\n<ul>\n<li>1 for Supervisor MGMT &#8211; a VDS portgroup can be used.\n<ul>\n<li>5 consecutive IP addresses<\/li>\n<\/ul>\n<\/li>\n<li>1 for Ingress &#8211; services that will be published externally through the LB<\/li>\n<li>1 for Egress &#8211; internet access for all containers and TKG clusters (&#8220;SNAT&#8221;)<\/li>\n<\/ul>\n<\/li>\n<li>2 IP subnets for internal K8s connectivity\n<ul>\n<li>1 subnet for K8s connectivity of Supervisor Services pods and TKG clusters<\/li>\n<li>1 subnet for K8s Service objects<\/li>\n<li>The subnets must not overlap with any other Supervisor subnets<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p><!--more--><\/p>\n<p>Once you launch the wizard and enter everything it needs, all you can do is sit back and watch what happens.<\/p>\n<p>vCenter runs the EAM service, which takes care of it all.<\/p>\n<ol>\n<li>It generates the password for the root account on the Supervisor Control VMs. For troubleshooting, you can retrieve this password by logging in to vCenter as root and running the following command:\n<ul>\n<li>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">\/usr\/lib\/vmware-wcp\/decryptK8Pwd.py<\/pre>\n<\/li>\n<\/ul>\n<\/li>\n<li>It deploys 1\/3 Supervisor VMs from an OVF\n<ul>\n<li>Whether it creates 1 or 3 depends on whether you select &#8220;HA&#8221; mode or not. That option, however, is only available as of version 9. 
In earlier versions it always created 3.<\/li>\n<\/ul>\n<\/li>\n<li>Once the Supervisor VMs start, it builds a K8s control plane cluster across them\n<ul>\n<li>Similar to <strong>kubeadm init<\/strong><\/li>\n<li><strong>kubeadm join<\/strong><\/li>\n<\/ul>\n<\/li>\n<li>The Supervisor Control Plane VMs then start launching the supporting containers\n<ul>\n<li>Container Storage Interface<\/li>\n<li>Cert Manager<\/li>\n<li>Network Operator<\/li>\n<li>NSX-NCP<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>And it is precisely when some of these services start, such as CSI and NSX-NCP, that the Supervisor begins communicating with vCenter and the NSX Manager.<\/p>\n<p>The next steps that happen (or should happen):<\/p>\n<ol>\n<li>Creation of the Segments that the Supervisor VMs attach to and over which K8s pods talk to each other<\/li>\n<li>Creation of the LB service in NSX<\/li>\n<li>Creation of DFW rules<\/li>\n<li>Configuration of the ESX hosts as K8s worker nodes.<\/li>\n<\/ol>\n<p>Since the remaining steps are carried out by the Supervisor itself, it needs connectivity to vCenter and NSX, and DNS names are used for that. This is where we ran into trouble, because DNS resolution was failing on us for some reason. 
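Incidentally, the subnet requirements listed earlier are easy to sanity-check before you ever launch the wizard. A minimal sketch of the pairwise non-overlap check, using made-up example ranges (none of these CIDRs come from the deployment described here):

```python
from ipaddress import ip_network

# Hypothetical stand-ins for the five Supervisor ranges discussed above;
# substitute your own values. Only the pairwise non-overlap property matters.
subnets = {
    "mgmt": ip_network("192.168.10.0/28"),     # Supervisor MGMT (5+ consecutive IPs)
    "ingress": ip_network("192.168.20.0/24"),  # services published via the LB
    "egress": ip_network("192.168.30.0/24"),   # SNAT for containers and TKG clusters
    "pods": ip_network("10.244.0.0/20"),       # internal pod connectivity
    "services": ip_network("10.96.0.0/22"),    # K8s Service objects
}

names = list(subnets)
overlaps = [
    (a, b)
    for i, a in enumerate(names)
    for b in names[i + 1:]
    if subnets[a].overlaps(subnets[b])
]
print(overlaps)  # an empty list means no two ranges collide
```

Anything printed here is a pair of ranges that would have to be fixed before running the wizard.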
It is worth mentioning that if you use a domain ending in <strong>.local<\/strong>, you have to enter it as a search domain in the wizard when creating Workload Management.<a href=\"https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-235-1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-1895 size-medium\" src=\"https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-235-1-300x129.png\" alt=\"\" width=\"300\" height=\"129\" srcset=\"https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-235-1-300x129.png 300w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-235-1-150x65.png 150w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-235-1-768x331.png 768w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-235-1-18x8.png 18w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-235-1.png 868w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a> <a href=\"https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-236.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-1896 size-medium\" src=\"https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-236-300x147.png\" alt=\"\" width=\"300\" height=\"147\" srcset=\"https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-236-300x147.png 300w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-236-1024x502.png 1024w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-236-150x74.png 150w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-236-768x377.png 768w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-236-1536x753.png 1536w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-236-18x9.png 18w, https:\/\/virtualguru.cz\/wp-content\/uploads\/2026\/01\/Vguru-Image-236.png 1646w\" 
sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>Kdy\u017e jsme se p\u0159ipojili do SV, tak klasick\u00e1 kontrola pomoc\u00ed ping a curl pro\u0161la. Co\u017e spoustu KB zmi\u0148uje, \u017ee m\u00e1te vyzkou\u0161et.<br \/>\n<a href=\"https:\/\/knowledge.broadcom.com\/external\/article\/389329\/vspherecsicontroller-pods-and-nsxncp-pod.html\">https:\/\/knowledge.broadcom.com\/external\/article\/389329\/vspherecsicontroller-pods-and-nsxncp-pod.html<\/a><br \/>\nCo ale d\u011blat, kdy\u017e toto projde a NSX-NCP je st\u00e1le ve stavu <strong>CrashLoopBackOff<\/strong>?<\/p>\n<p>Se supportem jsme se dostali o kus d\u00e1le:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">alpine:~# dig vcenter.mojedomena.local A\r\n\r\n; &lt;&lt;&gt;&gt; DiG 9.18.41 &lt;&lt;&gt;&gt; vcenter.mojedomena.local A\r\n;; global options: +cmd\r\n;; Got answer:\r\n;; WARNING: .local is reserved for Multicast DNS\r\n;; You are currently testing what happens when an mDNS query is leaked to DNS\r\n;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 53111\r\n;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\r\n\r\n;; OPT PSEUDOSECTION:\r\n; EDNS: version: 0, flags:; udp: 512\r\n;; QUESTION SECTION:\r\n;vcenter.mojedomena.local.         IN      A\r\n\r\n;; ANSWER SECTION:\r\nvcenter.mojedomena.local.  
1       IN      A       192.168.10.6\r\n\r\n;; Query time: 0 msec\r\n;; SERVER: 192.168.10.1#53(192.168.10.1) (UDP)\r\n;; WHEN: Fri Jan 02 11:26:33 CET 2026\r\n;; MSG SIZE  rcvd: 66\r\n\r\n\r\nalpine:~# dig vcenter.mojedomena.local AAAA\r\n\r\n; &lt;&lt;&gt;&gt; DiG 9.18.41 &lt;&lt;&gt;&gt; vcenter.mojedomena.local AAAA\r\n;; global options: +cmd\r\n;; Got answer:\r\n;; WARNING: .local is reserved for Multicast DNS\r\n;; You are currently testing what happens when an mDNS query is leaked to DNS\r\n;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: <b>NXDOMAIN<\/b>, id: 32530\r\n;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1\r\n\r\n;; OPT PSEUDOSECTION:\r\n; EDNS: version: 0, flags:; udp: 512\r\n;; QUESTION SECTION:\r\n;vcenter.mojedomena.local.         IN      AAAA\r\n\r\n;; AUTHORITY SECTION:\r\n.                       0       IN      SOA     a.root-servers.net. nstld.verisign-grs.com. 2026010200 1800 900 604800 86400\r\n\r\n;; Query time: 40 msec\r\n;; SERVER: 192.168.10.1#53(192.168.10.1) (UDP)\r\n;; WHEN: Fri Jan 02 11:26:21 CET 2026\r\n;; MSG SIZE  rcvd: 125<\/pre>\n<h2>DNS<\/h2>\n<p>The problem is that if the DNS server is not authoritative for the domain, a query for an unknown record may get forwarded to an upstream DNS server. If that upstream server turns out to be a root DNS server, a .local domain definitely has no business there, and the query ends with an NXDOMAIN answer.<\/p>\n<p>With support we also dug into why this only happens to NSX-NCP. 
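Why does an NXDOMAIN on the AAAA query break name resolution entirely for some clients? Here is a toy model of the behaviour (pure Python, an illustration only, not NCP's actual resolver code): a dual-stack resolver asks for AAAA first, and an NXDOMAIN answer asserts that the name has no records of any type, so it never falls back to A.

```python
class NxDomain(Exception):
    """DNS 'Name Error': the name does not exist for ANY record type."""

def make_server(aaaa_behaviour):
    # "nodata": well-behaved server (empty NOERROR answer for AAAA)
    # "nxdomain": the RFC 4074 misbehaviour seen in the dig output above
    def query(name, rdtype):
        if rdtype == "A":
            return ["192.168.10.6"]  # the record that really exists
        if aaaa_behaviour == "nodata":
            return []
        raise NxDomain(name)
    return query

def resolve_aaaa_first(query, name):
    """Dual-stack client: try AAAA first, then fall back to A."""
    for rdtype in ("AAAA", "A"):
        try:
            answer = query(name, rdtype)
        except NxDomain:
            return []  # name declared non-existent: no point trying A
        if answer:
            return answer
    return []

good = resolve_aaaa_first(make_server("nodata"), "vcenter.mojedomena.local")
bad = resolve_aaaa_first(make_server("nxdomain"), "vcenter.mojedomena.local")
print(good)  # ['192.168.10.6'] -- clean empty AAAA answer, fallback to A works
print(bad)   # [] -- NXDOMAIN on AAAA kills resolution before A is ever tried
```

Standard libc resolvers are more forgiving about this combination, which is one reason ping and curl can look perfectly fine while a strict AAAA-first client fails.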
Here is another test that sheds some light on it.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">root [ \/ ]# tdnf install python3-pip\r\n\r\nroot [ \/ ]# pip install eventlet==0.33.3\r\n  \r\nroot [ \/ ]# python3\r\n\r\n&gt;&gt;&gt; import socket\r\n&gt;&gt;&gt; import eventlet\r\n&gt;&gt;&gt; socket.create_connection((\"vcenter.mojedomena.local\", 443), 5)\r\n&lt;socket.socket fd=3, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.250.20', 54718), raddr=('192.168.10.6', 443)&gt;\r\n\r\n&gt;&gt;&gt; <b>eventlet.monkey_patch()<\/b>\r\n&gt;&gt;&gt; socket.create_connection((\"vcenter.mojedomena.local\", 443), 5)\r\n\r\nTraceback (most recent call last):\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 456, in resolve\r\n    return _proxy.query(name, rdtype, raise_on_no_answer=raises,\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 412, in query\r\n    return end()\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 391, in end\r\n    raise result[1]\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 372, in step\r\n    a = fun(*args, **kwargs)\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/dns\/resolver.py\", line 1371, in query\r\n    return self.resolve(\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/dns\/resolver.py\", line 1328, in resolve\r\n    timeout = self._compute_timeout(start, lifetime, resolution.errors)\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/dns\/resolver.py\", line 1084, in _compute_timeout\r\n    raise LifetimeTimeout(timeout=duration, errors=errors)\r\ndns.resolver.LifetimeTimeout: The resolution lifetime expired after 5.106 seconds: Server Do53:127.0.0.53@53 answered udp() got an unexpected keyword argument 'ignore_errors'; Server Do53:127.0.0.53@53 answered udp() got an unexpected keyword argument 
'ignore_errors'; Server Do53:127.0.0.53@53 answered udp() got an unexpected keyword argument 'ignore_errors'; Server Do53:127.0.0.53@53 answered udp() got an unexpected keyword argument 'ignore_errors'; Server Do53:127.0.0.53@53 answered udp() got an unexpected keyword argument 'ignore_errors'; Server Do53:127.0.0.53@53 answered udp() got an unexpected keyword argument 'ignore_errors'; Server Do53:127.0.0.53@53 answered udp() got an unexpected keyword argument 'ignore_errors'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n  File \"&lt;stdin&gt;\", line 1, in &lt;module&gt;\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/green\/socket.py\", line 44, in create_connection\r\n    for res in getaddrinfo(host, port, 0, SOCK_STREAM):\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 549, in getaddrinfo\r\n    qname, addrs = _getaddrinfo_lookup(host, family, flags)\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 522, in _getaddrinfo_lookup\r\n    raise err\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 511, in _getaddrinfo_lookup\r\n    answer = resolve(host, qfamily, False, use_network=use_network)\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 464, in resolve\r\n    raise EAI_EAGAIN_ERROR\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 511, in _getaddrinfo_lookup\r\n    answer = resolve(host, qfamily, False, use_network=use_network)\r\n  File \"\/usr\/lib\/python3.10\/site-packages\/eventlet\/support\/greendns.py\", line 464, in resolve\r\n    raise EAI_EAGAIN_ERROR\r\nsocket.gaierror: [Errno -3] Lookup timed out\r\n<\/pre>\n<p>This is because the DNS RFC (<a href=\"https:\/\/www.ietf.org\/rfc\/rfc4074.txt\">https:\/\/www.ietf.org\/rfc\/rfc4074.txt<\/a>) notes:<\/p>\n<blockquote>\n<p><em>Many existing DNS clients (resolvers) that support IPv6 first search for 
AAAA Resource Records (RRs) of a target host name, and then for A RRs of the same name. This fallback mechanism is based on the DNS specifications, which if not obeyed by authoritative servers, can produce unpleasant results. In some cases, for example, a web browser fails to connect to a web server it could otherwise reach. In the following sections, this memo describes some typical cases of such misbehavior and its (bad) effects.<\/em><\/p>\n<\/blockquote>\n<p>And this is exactly the NSX-NCP case: it uses eventlet.monkey_patch(), which makes it treat DNS very strictly &#8211; first the AAAA record, then the A record.<\/p>\n<p>There are two solutions. One is a workaround that only lasts until the next Supervisor update: editing the NSX-NCP Deployment spec right after all the supporting Deployments start being created.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">root@423826f144b646d762b12a7399ac0ddd [ ~ ]# kubectl -n vmware-system-nsx get deployment\/nsx-ncp -oyaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  annotations:\r\n    deployment.kubernetes.io\/revision: \"5\"\r\n    kubectl.kubernetes.io\/last-applied-configuration: |\r\n   ...\r\n  generation: 5\r\n  labels:\r\n    component: nsx-ncp\r\n    tier: nsx-networking\r\n    version: v1\r\n  name: nsx-ncp\r\n  namespace: vmware-system-nsx\r\n  resourceVersion: \"9178866\"\r\n  uid: e9759f64-754b-4d14-a3b6-91f66029161d\r\nspec:\r\n  progressDeadlineSeconds: 600\r\n  replicas: 2\r\n  revisionHistoryLimit: 10\r\n  selector:\r\n    matchLabels:\r\n      component: nsx-ncp\r\n  strategy:\r\n    rollingUpdate:\r\n      maxSurge: 1\r\n      maxUnavailable: 1\r\n    type: RollingUpdate\r\n  template:\r\n    metadata:\r\n      annotations:\r\n        kubectl.kubernetes.io\/restartedAt: \"2025-12-10T11:30:18Z\"\r\n        last-sync: \"1763989124.2055643\"\r\n   
     prometheus.io\/port: \"8001\"\r\n        prometheus.io\/scrape: \"true\"\r\n      creationTimestamp: null\r\n      labels:\r\n        component: nsx-ncp\r\n        tier: nsx-networking\r\n        version: v1\r\n    spec:\r\n      affinity:\r\n        podAntiAffinity:\r\n          requiredDuringSchedulingIgnoredDuringExecution:\r\n          - labelSelector:\r\n              matchExpressions:\r\n              - key: component\r\n                operator: In\r\n                values:\r\n                - nsx-ncp\r\n            topologyKey: kubernetes.io\/hostname\r\n      containers:\r\n      - env:\r\n        <b>- name: EVENTLET_NO_GREENDNS                        &lt;===============\r\n          value: \"yes\"                                      &lt;===============<\/b>\r\n        - name: NCP_NAME\r\n          valueFrom:\r\n            fieldRef:\r\n              apiVersion: v1\r\n              fieldPath: metadata.name\r\n        - name: NCP_NAMESPACE\r\n          valueFrom:\r\n            fieldRef:\r\n              apiVersion: v1\r\n              fieldPath: metadata.namespace \r\n<\/pre>\n<p>The second solution should be persistent: make sure your DNS server does not forward AAAA queries for the domain upstream &#8211; set it up as <strong>authoritative<\/strong> for your .local domain.<\/p>\n<h2>MTU<\/h2>\n<p>While simulating this problem we then ran into a second issue that was not at all obvious at first sight.<\/p>\n<p>We had to build a new environment in which we could reproduce the whole situation. When I tried it in my own lab I had no luck, but I did not have quite all the pieces available there &#8211; most notably the Fortigate that was serving DNS. 
So we decided to build a nested environment on top of the existing cluster, so that we would not have to hunt down another 4 ESX hosts. In version 8 the Supervisor requires 3 Control Plane VMs, which means that without 4 hosts in the cluster the wizard will not let you continue.<\/p>\n<p>Hosts created, joined to vCenter, NSX configured. But as soon as we tried to bring up a test VM that would talk to vCenter, we hit a problem. ICMP went through; curl over HTTP went through &#8211; a response came back. But whenever we tried to establish an HTTPS connection, it failed. With another VM in the same network but outside the nested environment, everything worked.<\/p>\n<p>For a long time I racked my brain over what was different about that environment. It was nested, but I have built plenty of those in all my years with VMware and never ran into this problem. I even wondered about the restricted instruction sets of newer CPUs and compared what the VMs were getting on other ESX hosts.<\/p>\n<p>As the title suggests, the problem was the MTU. 
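Before walking through the capture, the arithmetic behind the failure is simple enough to sketch. A back-of-the-envelope calculation (1460 B is the MSS the endpoints derive from a 1500 B interface MTU; the 54 B minimum Geneve overhead is the figure used in this post):

```python
# How a full-size TCP segment overruns a 1500 B transit MTU once Geneve
# encapsulation is added. Standard IPv4/TCP header sizes assumed.

IP_HEADER = 20
TCP_HEADER = 20
GENEVE_OVERHEAD = 54  # minimum Geneve overhead, per the discussion in this post

inner_mtu = 1500  # what the guest NIC/OS is configured with
mss = inner_mtu - IP_HEADER - TCP_HEADER
print(mss)            # 1460 -- the value announced in the TCP SYN

outer_packet = inner_mtu + GENEVE_OVERHEAD
print(outer_packet)   # 1554 -- too big for a router still set to MTU 1500

# Small ICMP/HTTP exchanges stay under the limit even when encapsulated,
# which is why the basic connectivity tests all passed:
small_payload = 200
print(small_payload + IP_HEADER + TCP_HEADER + GENEVE_OVERHEAD <= 1500)  # True
```

The fix is then either raising the transit MTU (jumbo frames) or clamping the MSS so that the encapsulated frame fits.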
The nested hosts were attached to a different overlay VLAN, and changing the MTU on the router between the segments had been forgotten &#8211; it was left at 1500.<\/p>\n<p>I captured how it manifested itself in a packet trace:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">12:14:06.524329 IP 192.168.250.11.59648 &gt; vcenter.mojedomena.local.https: Flags [S], seq 2119065766, win 64240, options [<b>mss 1460<\/b>,sackOK,TS val 3900574725 ecr 0,nop,wscale 7], length 0\r\n12:14:06.525493 IP vcenter.mojedomena.local.https &gt; 192.168.250.11.59648: Flags [S.], seq 3546398388, ack 2119065767, win 28960, options [mss 1460,sackOK,TS val 1399004370 ecr 3900574725,nop,wscale 8], length 0\r\n12:14:06.525602 IP 192.168.250.11.59648 &gt; vcenter.mojedomena.local.https: Flags [.], ack 1, win 502, options [nop,nop,TS val 3900574727 ecr 1399004370], length 0\r\n12:14:06.528653 IP 192.168.250.11.59648 &gt; vcenter.mojedomena.local.https: Flags [P.], seq 1:518, ack 1, win 502, options [nop,nop,TS val 3900574729 ecr 1399004370], length 517\r\n12:14:06.529641 IP vcenter.mojedomena.local.https &gt; 192.168.250.11.59648: Flags [.], ack 518, win 118, options [nop,nop,TS val 1399004374 ecr 3900574729], length 0\r\n12:14:06.543649 IP vcenter.mojedomena.local.https &gt; 192.168.250.11.59648: Flags [P.], seq 1449:1889, ack 518, win 118, options [nop,nop,TS val 1399004388 ecr 3900574729], length 440\r\n12:14:06.543780 IP 192.168.250.11.59648 &gt; vcenter.mojedomena.local.https: Flags [.], ack 1, win 502, options [nop,nop,TS val 3900574745 ecr 1399004374,nop,nop,sack 1 {1449:1889}], length 0\r\n12:14:38.379354 IP 192.168.250.11.59648 &gt; vcenter.mojedomena.local.https: Flags [F.], seq 518, ack 1, win 502, options [nop,nop,TS val 3900606580 ecr 1399004374,nop,nop,sack 1 {1449:1889}], length 0\r\n12:14:38.380200 IP vcenter.mojedomena.local.https &gt; 192.168.250.11.59648: Flags [F.], seq 1889, ack 519, win 118, options [nop,nop,TS val 
1399036224 ecr 3900606580], length 0\r\n12:14:38.380357 IP 192.168.250.11.59648 &gt; vcenter.mojedomena.local.https: Flags [R], seq 2119066285, win 0, length 0<\/pre>\n<p>I have highlighted the reason, and it is right on the first line. When a TCP session is established, the client announces up front what segment size (the MSS, derived from its MTU) it can send and receive. It does this purely from the settings of its own NIC and OS, with no check of the path in between.<\/p>\n<p>So how come ICMP and HTTP went through and the problem only showed up with HTTPS?<\/p>\n<p>ICMP and HTTP send relatively small packets that fit within the roughly 1400 B budget without any trouble &#8211; I did not try fetching images, where it would probably have shown up, but do test fetching an image from the CLI \ud83d\ude42<\/p>\n<p>HTTPS wants to send the certificate right at the start of the conversation, and that gets split into the 1460 B segments announced at the start of the TCP handshake. But once Geneve is layered on top, adding at least 54 B of overhead of its own, we end up beyond the 1500 MTU configured on the router, which was dropping those packets.<\/p>\n<p>So a check with a 1600+ B ICMP packet gave it all away; the MTU was raised to jumbo 9126 and voila, everything started working.<\/p>\n<p>So: it is always DNS, and check your MTU. It really does matter, and it is not just something the network folks make up.<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>I was recently working on a Supervisor cluster deployment at a customer site. 
A fairly trivial task, you might think, but we ran into a few things that had not yet been fully<\/p>\n<div class=\"more-link-wrapper\"><a class=\"more-link\" href=\"https:\/\/virtualguru.cz\/en\/2026\/01\/26\/proc-na-dns-a-mtu-skutecne-zalezi-supervisor-story\/\">Continue Reading<span class=\"screen-reader-text\">Why DNS and MTU Really Matter &#8211; Supervisor Story<\/span> <i class=\"fas fa-angle-right\"><\/i><\/a><\/div>","protected":false},"author":4,"featured_media":1895,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"iawp_total_views":88,"footnotes":""},"categories":[51,50,64,3],"tags":[87,43,52,88,59,11,7,9],"class_list":["post-1888","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-k8s","category-nsx","category-vcf","category-vsphere","tag-dns","tag-failed","tag-k8s","tag-mtu","tag-nsx","tag-troubleshooting","tag-vcenter","tag-vsphere","entry"],"_links":{"self":[{"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/posts\/1888","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/comments?post=1888"}],"version-history":[{"count":6,"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/posts\/1888\/revisions"}],"predecessor-version":[{"id":1897,"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/posts\/1888\/revisions\/1897"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/virtualguru.cz\/en\/w
p-json\/wp\/v2\/media\/1895"}],"wp:attachment":[{"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/media?parent=1888"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/categories?post=1888"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/virtualguru.cz\/en\/wp-json\/wp\/v2\/tags?post=1888"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}