Docker provides a healthcheck feature that can be scripted individually for every container.
Its official use case is to check whether the service(s) in the container are responding and working correctly.
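For context, a healthcheck is wired into the image with the HEALTHCHECK instruction; Docker then runs the given command periodically and flips the container to unhealthy after enough consecutive failures. A minimal sketch (base image, script name, and timings are assumptions):

```dockerfile
FROM alpine:3.19
COPY healthcheck.sh /usr/local/bin/healthcheck.sh
RUN chmod +x /usr/local/bin/healthcheck.sh
# run the check every 5 minutes; exit 0 means healthy, exit 1 means unhealthy
HEALTHCHECK --interval=5m --timeout=10s --retries=3 \
    CMD /usr/local/bin/healthcheck.sh
```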

But it is also important that your container is secure. It is never a good idea to build a container image once and then run it forever. Best practice is to keep the container image static, so that it produces reproducible results; running a self-updating container is bad practice, because you can never be sure it is still a working container.

Good practice, on the other hand, is to know when you have to update your container images. This is where the container healthcheck can help.

Before you start your service process in the entrypoint script, start a background process that periodically checks for software updates.
In an Alpine Linux container this could be done like this...

# Paths for the background job's PID file and update report
# (example locations; adjust to your image)
APK_CRON_PID="${APK_CRON_PID:-/run/apk-cron.pid}"
APK_CRON_OUTPUT="${APK_CRON_OUTPUT:-/run/apk-cron.out}"

echo "$0: Starting apk update cron..."
APK_RENEW_SLEEP=79000 # plus $RANDOM (0-32767): roughly one day, with jitter
( nextsleep=0
    while [ "$(sleep "$nextsleep" ; echo 0)" -eq 0 ] ; do
        nextsleep=$((APK_RENEW_SLEEP + RANDOM))
        touch "$APK_CRON_PID"
        # list upgradable packages; drop the index-fetch progress lines
        apk list --no-cache -u 2>&1 | grep -v "^fetch .*APKINDEX.tar.gz$" > "$APK_CRON_OUTPUT" 2>&1
    done
    rm -f "$APK_CRON_PID" ) &
echo "$!" > "$APK_CRON_PID"
echo "$0: Started apk update cron"

Instead of the apk command you could use any system package manager's update-check command (apt, dnf, zypper, yum, ...) that writes the available updates to the output file, and leaves the file empty when there are no updates.
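The healthcheck below only looks at whether the output file is non-empty, so any package manager works as long as it honors that contract. A sketch of a Debian/Ubuntu variant (the `apt_update_check` function name and the fallback output path are assumptions; `APK_CRON_OUTPUT` is reused from the Alpine example):

```shell
# Hypothetical Debian/Ubuntu variant of the periodic update check
APK_CRON_OUTPUT="${APK_CRON_OUTPUT:-/tmp/update-check.out}"

apt_update_check() {
    # refresh the package index quietly, then write the upgradable
    # packages to the output file; the "Listing..." header is dropped
    # so the file stays empty when there is nothing to upgrade
    apt-get update -qq
    apt list --upgradable 2>/dev/null | grep -v "^Listing" > "$APK_CRON_OUTPUT"
}
```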

In your healthcheck script you can add these checks...


# apk update check
[ -s "$APK_CRON_OUTPUT" ] && {
    update_count="$(wc -l < "$APK_CRON_OUTPUT")"
    echo "$update_count update(s) available" >&2
    cat "$APK_CRON_OUTPUT"
    exit 1
}

# apk update cron pid check
[ -f "/proc/$(cat "$APK_CRON_PID")/status" ] || {
    echo "apk update cron proc not found" >&2
    exit 1
}

If your container is in status unhealthy, you can now inspect it to find the cause (replace CONTAINER with your container's name or ID)...
docker container inspect -f "{{ range .State.Health.Log }}[{{ .ExitCode }}] {{ .Output }}{{ end }}" CONTAINER