Hello, I have the following problem:

docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
Active: failed (Result: exit-code) since Thu 2023-01-26 06:10:52 CET; 2h 23min ago
TriggeredBy: × docker.socket
Docs: https://docs.docker.com
Process: 1261 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
Main PID: 1261 (code=exited, status=1/FAILURE)
CPU: 215ms

Jan 26 06:10:52 laptop-sebastian systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Jan 26 06:10:52 laptop-sebastian systemd[1]: Stopped Docker Application Container Engine.
Jan 26 06:10:52 laptop-sebastian systemd[1]: docker.service: Start request repeated too quickly.
Jan 26 06:10:52 laptop-sebastian systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 26 06:10:52 laptop-sebastian systemd[1]: Failed to start Docker Application Container Engine.

When I now try to start the Docker service, I get the following error:
[sebastian@laptop-sebastian ~]$ sudo systemctl start docker.service
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

Excerpt from the journal log:

[sebastian@laptop-sebastian ~]$ journalctl -xeu docker.service
░░ 
░░ The job identifier is 2998 and the job result is failed.
Jan 26 08:36:53 laptop-sebastian systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
░░ Subject: Automatic restarting of a unit has been scheduled
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ Automatic restarting of the unit docker.service has been scheduled, as the result for
░░ the configured Restart= setting for the unit.
Jan 26 08:36:53 laptop-sebastian systemd[1]: Stopped Docker Application Container Engine.
░░ Subject: A stop job for unit docker.service has finished
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ A stop job for unit docker.service has finished.
░░ 
░░ The job identifier is 3095 and the job result is done.
Jan 26 08:36:53 laptop-sebastian systemd[1]: docker.service: Start request repeated too quickly.
Jan 26 08:36:53 laptop-sebastian systemd[1]: docker.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ The unit docker.service has entered the 'failed' state with result 'exit-code'.
Jan 26 08:36:53 laptop-sebastian systemd[1]: Failed to start Docker Application Container Engine.
░░ Subject: A start job for unit docker.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ A start job for unit docker.service has finished with a failure.
░░ 
░░ The job identifier is 3095 and the job result is failed.

Does anyone have an idea?

You could check whether the output of dockerd --debug, run as root, turns up anything interesting.

Here is the output:

$ sudo dockerd --debug
INFO[2023-01-27T19:37:46.696656458+01:00] Starting up                                  
DEBU[2023-01-27T19:37:46.697539685+01:00] Listener created for HTTP on unix (/var/run/docker.sock) 
DEBU[2023-01-27T19:37:46.697556447+01:00] Containerd not running, starting daemon managed containerd 
INFO[2023-01-27T19:37:46.700622875+01:00] libcontainerd: started new containerd process  pid=196293
INFO[2023-01-27T19:37:46.700664850+01:00] parsed scheme: "unix"                         module=grpc
INFO[2023-01-27T19:37:46.700674838+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2023-01-27T19:37:46.700692089+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2023-01-27T19:37:46.700702007+01:00] ClientConn switching balancer to "pick_first"  module=grpc
WARN[0000] containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header 
INFO[2023-01-27T19:37:46.791241792+01:00] starting containerd                           revision=5b842e528e99d4d4c1686467debf2bd4b88ecd86.m version=v1.6.15
INFO[2023-01-27T19:37:46.802976215+01:00] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2023-01-27T19:37:46.803034324+01:00] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-27T19:37:46.808440122+01:00] skip loading plugin "io.containerd.snapshotter.v1.aufs"...  error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.8-arch1-1\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-01-27T19:37:46.808877335+01:00] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-27T19:37:46.809093567+01:00] skip loading plugin "io.containerd.snapshotter.v1.btrfs"...  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-01-27T19:37:46.809120247+01:00] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2023-01-27T19:37:46.809140850+01:00] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2023-01-27T19:37:46.809156355+01:00] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2023-01-27T19:37:46.809190718+01:00] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-27T19:37:46.809427134+01:00] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2023-01-27T19:37:46.810234861+01:00] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-01-27T19:37:46.810276766+01:00] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2023-01-27T19:37:46.810308265+01:00] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2023-01-27T19:37:46.810327332+01:00] metadata content store policy set             policy=shared
INFO[2023-01-27T19:37:46.811356180+01:00] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2023-01-27T19:37:46.811375806+01:00] loading plugin "io.containerd.event.v1.exchange"...  type=io.containerd.event.v1
INFO[2023-01-27T19:37:46.811386701+01:00] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2023-01-27T19:37:46.811420086+01:00] loading plugin "io.containerd.service.v1.introspection-service"...  type=io.containerd.service.v1
INFO[2023-01-27T19:37:46.811432029+01:00] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2023-01-27T19:37:46.811444391+01:00] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2023-01-27T19:37:46.811460594+01:00] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2023-01-27T19:37:46.811475471+01:00] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2023-01-27T19:37:46.811487344+01:00] loading plugin "io.containerd.service.v1.leases-service"...  type=io.containerd.service.v1
INFO[2023-01-27T19:37:46.811498379+01:00] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2023-01-27T19:37:46.811512557+01:00] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2023-01-27T19:37:46.811524360+01:00] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2023-01-27T19:37:46.811576183+01:00] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
INFO[2023-01-27T19:37:46.811604330+01:00] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2023-01-27T19:37:46.812138135+01:00] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
DEBU[2023-01-27T19:37:46.812157202+01:00] No RDT config file specified, RDT not configured 
INFO[2023-01-27T19:37:46.812168167+01:00] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812181507+01:00] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2023-01-27T19:37:46.812383072+01:00] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812399904+01:00] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812411219+01:00] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812421695+01:00] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812433219+01:00] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812445581+01:00] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812456616+01:00] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812471283+01:00] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812485112+01:00] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2023-01-27T19:37:46.812511372+01:00] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812522477+01:00] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812543779+01:00] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2023-01-27T19:37:46.812559215+01:00] loading plugin "io.containerd.tracing.processor.v1.otlp"...  type=io.containerd.tracing.processor.v1
INFO[2023-01-27T19:37:46.812571367+01:00] skip loading plugin "io.containerd.tracing.processor.v1.otlp"...  error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
INFO[2023-01-27T19:37:46.812581774+01:00] loading plugin "io.containerd.internal.v1.tracing"...  type=io.containerd.internal.v1
ERRO[2023-01-27T19:37:46.812597977+01:00] failed to initialize a tracing processor "otlp"  error="no OpenTelemetry endpoint: skip plugin"
INFO[2023-01-27T19:37:46.813699531+01:00] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2023-01-27T19:37:46.813735360+01:00] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2023-01-27T19:37:46.813761062+01:00] serving...                                    address=/var/run/docker/containerd/containerd.sock
DEBU[2023-01-27T19:37:46.813772167+01:00] sd notification                               error="<nil>" notified=false state="READY=1"
INFO[2023-01-27T19:37:46.813794027+01:00] containerd successfully booted in 0.023689s  
DEBU[2023-01-27T19:37:46.819368984+01:00] Created containerd monitoring client          address=/var/run/docker/containerd/containerd.sock
DEBU[2023-01-27T19:37:46.821215266+01:00] Started daemon managed containerd            
DEBU[2023-01-27T19:37:46.822850415+01:00] Golang's threads limit set to 110070         
INFO[2023-01-27T19:37:46.823172249+01:00] parsed scheme: "unix"                         module=grpc
INFO[2023-01-27T19:37:46.823195157+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2023-01-27T19:37:46.823217158+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2023-01-27T19:37:46.823235177+01:00] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2023-01-27T19:37:46.823257736+01:00] metrics API listening on /var/run/docker/metrics.sock 
INFO[2023-01-27T19:37:46.825014900+01:00] parsed scheme: "unix"                         module=grpc
INFO[2023-01-27T19:37:46.825029986+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2023-01-27T19:37:46.825043465+01:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2023-01-27T19:37:46.825051008+01:00] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2023-01-27T19:37:46.826413842+01:00] Using default logging driver json-file       
DEBU[2023-01-27T19:37:46.826898548+01:00] [graphdriver] priority list: [btrfs zfs overlay2 fuse-overlayfs aufs overlay devicemapper vfs] 
DEBU[2023-01-27T19:37:46.847586551+01:00] processing event stream                       module=libcontainerd namespace=plugins.moby
DEBU[2023-01-27T19:37:46.877210744+01:00] backingFs=extfs, projectQuotaSupported=false, indexOff="index=off,", userxattr=""  storage-driver=overlay2
INFO[2023-01-27T19:37:46.877254047+01:00] [graphdriver] using prior storage driver: overlay2 
DEBU[2023-01-27T19:37:46.877273952+01:00] Initialized graph driver overlay2            
DEBU[2023-01-27T19:37:46.878151870+01:00] No quota support for local volumes in /var/lib/docker/volumes: Filesystem does not support, or has not enabled quotas 
DEBU[2023-01-27T19:37:46.881555218+01:00] Max Concurrent Downloads: 3                  
DEBU[2023-01-27T19:37:46.881567021+01:00] Max Concurrent Uploads: 5                    
DEBU[2023-01-27T19:37:46.881572260+01:00] Max Download Attempts: 5                     
INFO[2023-01-27T19:37:46.881589511+01:00] Loading containers: start.                   
DEBU[2023-01-27T19:37:46.881942005+01:00] processing event stream                       module=libcontainerd namespace=moby
DEBU[2023-01-27T19:37:46.881964564+01:00] Option Experimental: false                   
DEBU[2023-01-27T19:37:46.882025327+01:00] Option DefaultDriver: bridge                 
DEBU[2023-01-27T19:37:46.882040273+01:00] Option DefaultNetwork: bridge                
DEBU[2023-01-27T19:37:46.882057804+01:00] Network Control Plane MTU: 1500              
DEBU[2023-01-27T19:37:46.888624591+01:00] /usr/bin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION] 
DEBU[2023-01-27T19:37:46.889540434+01:00] /usr/bin/iptables, [--wait -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER] 
DEBU[2023-01-27T19:37:46.891281185+01:00] /usr/bin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER] 
DEBU[2023-01-27T19:37:46.892086327+01:00] /usr/bin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER] 
DEBU[2023-01-27T19:37:46.892840485+01:00] /usr/bin/iptables, [--wait -t nat -D PREROUTING] 
DEBU[2023-01-27T19:37:46.893418011+01:00] /usr/bin/iptables, [--wait -t nat -D OUTPUT] 
DEBU[2023-01-27T19:37:46.893974026+01:00] /usr/bin/iptables, [--wait -t nat -F DOCKER] 
DEBU[2023-01-27T19:37:46.894527387+01:00] /usr/bin/iptables, [--wait -t nat -X DOCKER] 
DEBU[2023-01-27T19:37:46.895062520+01:00] /usr/bin/iptables, [--wait -t filter -F DOCKER] 
DEBU[2023-01-27T19:37:46.895621049+01:00] /usr/bin/iptables, [--wait -t filter -X DOCKER] 
DEBU[2023-01-27T19:37:46.896129431+01:00] /usr/bin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-1] 
DEBU[2023-01-27T19:37:46.896696691+01:00] /usr/bin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-1] 
DEBU[2023-01-27T19:37:46.897191175+01:00] /usr/bin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-2] 
DEBU[2023-01-27T19:37:46.897708427+01:00] /usr/bin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-2] 
DEBU[2023-01-27T19:37:46.898215692+01:00] /usr/bin/iptables, [--wait -t filter -F DOCKER-ISOLATION] 
DEBU[2023-01-27T19:37:46.898719046+01:00] /usr/bin/iptables, [--wait -t filter -X DOCKER-ISOLATION] 
DEBU[2023-01-27T19:37:46.899250337+01:00] /usr/bin/iptables, [--wait -t nat -n -L DOCKER] 
DEBU[2023-01-27T19:37:46.899749780+01:00] /usr/bin/iptables, [--wait -t nat -N DOCKER] 
DEBU[2023-01-27T19:37:46.900276321+01:00] /usr/bin/iptables, [--wait -t filter -n -L DOCKER] 
DEBU[2023-01-27T19:37:46.900828704+01:00] /usr/bin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-1] 
DEBU[2023-01-27T19:37:46.901724432+01:00] /usr/bin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-2] 
DEBU[2023-01-27T19:37:46.902403510+01:00] /usr/bin/iptables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-2] 
DEBU[2023-01-27T19:37:46.903039005+01:00] /usr/bin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN] 
DEBU[2023-01-27T19:37:46.903608360+01:00] /usr/bin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN] 
DEBU[2023-01-27T19:37:46.904198318+01:00] /usr/bin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -j RETURN] 
DEBU[2023-01-27T19:37:46.905266837+01:00] /usr/bin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN] 
DEBU[2023-01-27T19:37:46.912187306+01:00] daemon configured with a 15 seconds minimum shutdown timeout 
DEBU[2023-01-27T19:37:46.912211192+01:00] start clean shutdown of all containers with a 15 seconds timeout... 
DEBU[2023-01-27T19:37:46.912241713+01:00] found 0 orphan layers                        
DEBU[2023-01-27T19:37:46.912587083+01:00] Cleaning up old mountid : start.             
INFO[2023-01-27T19:37:46.912709377+01:00] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
DEBU[2023-01-27T19:37:46.912862961+01:00] Cleaning up old mountid : done.              
INFO[2023-01-27T19:37:46.913214687+01:00] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2023-01-27T19:37:46.913253729+01:00] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
DEBU[2023-01-27T19:37:46.913385451+01:00] received signal                               signal=terminated
DEBU[2023-01-27T19:37:46.913455643+01:00] sd notification                               error="<nil>" notified=false state="STOPPING=1"
failed to start daemon: Error initializing network controller: invalid CIDR address: 172.26.0/16
  • krisz replied to this post.

    8u3631984 failed to start daemon: Error initializing network controller: invalid CIDR address: 172.26.0/16

    You will probably find the error in /etc/docker/daemon.json or, if you have modified your docker.service, in the corresponding drop-in file.
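
    For reference, 172.26.0/16 is rejected because an IPv4 address needs four octets (the network would be 172.26.0.0/16). If the offending entry was a bridge setting in /etc/docker/daemon.json, a corrected file might look roughly like the sketch below. This is only a hypothetical example, since the actual contents of the file are not shown in the thread; "bip" is the daemon.json option that sets the docker0 bridge address.

    $ cat /etc/docker/daemon.json
    {
      "bip": "172.26.0.1/16"
    }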

    That hint was good. I deleted the file and restarted the service, and now it works.
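
    For completeness, the fix-and-restart sequence probably looked roughly like this (reconstructed, since the exact commands are not shown in the thread; the reset-failed step is only needed if systemd still reports "Start request repeated too quickly"):

    # Remove (or correct) the broken daemon configuration, then restart the unit.
    $ sudo rm /etc/docker/daemon.json
    $ sudo systemctl reset-failed docker.service
    $ sudo systemctl start docker.service
    $ systemctl status docker.service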

    • krisz replied to this post.

      8u3631984
      The sledgehammer approach, then 😃 but if it works for you, that's fine.

      How to mark your post as solved is explained in the forum FAQ.