A Self-Hosted Runner in Blink is a lightweight agent that you deploy within your own environment—whether it’s on-premises, in a private cloud, or inside a secure VPC. It extends Blink’s automation capabilities to systems that aren’t publicly accessible, allowing workflows to securely interact with internal services, databases, secret managers, and cloud resources behind firewalls.
Container Engine: Runners in the Blink Platform must be deployed in an environment with a container engine. Supported environments include Docker and Kubernetes.
Note: Blink’s cloud services will never initiate communication with the self-hosted runner. All communication is initiated from the runner side.
Resource Access: Depending on the use case, Blink’s runner can be integrated with various cloud services. Make sure to allow network access to any internal or external services the runner needs to interact with.
Docker Hub Communication: The runner pulls its Docker images from Docker Hub, so ensure it has outbound access to Docker Hub for successful image pulls.
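Before installing, you can verify outbound connectivity from the host where the runner will run. A minimal sketch using the standard Docker Hub registry endpoint; an unauthenticated request returns HTTP 401, which still confirms reachability:

```bash
# Confirm the host can reach Docker Hub's registry API.
# An HTTP 401 response is expected without credentials and still
# proves that DNS resolution and outbound HTTPS work.
curl -sI https://registry-1.docker.io/v2/ | head -n 1
```

Remember that all traffic is outbound only: the runner initiates every connection, so no inbound firewall rules are required.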
While Blink’s default cloud-hosted runner works well for many use cases, we strongly recommend deploying a Self-Hosted Runner when you need more control, flexibility, or integration with private infrastructure.
Self-Hosted Runners offer key advantages:
Greater Flexibility – Fully customize the architecture, configuration, and execution environment to match your specific needs.
Scalability – Allocate and scale resources based on your workload, without limitations from shared cloud infrastructure.
Workload Optimization – Distribute tasks across your infrastructure to reduce latency and improve performance.
Enhanced Security & Compliance – Keep sensitive data, secrets, and internal systems within your own environment to meet compliance and security requirements.
Common scenarios where a Self-Hosted Runner is recommended:
Running actions in a private Kubernetes cluster with no public API access—deploy the runner inside the cluster to run commands locally.
Accessing resources in a private subnetwork—place the runner in a public subnet that has routing access to internal systems.
Minimizing data transfer costs—deploy the runner in a subnetwork that uses private endpoints (e.g., to an S3 bucket).
Storing secrets in a self-hosted vault instead of Blink’s default connections vault.
Storing data in your own object storage instead of Blink’s default object storage.
Reducing latency to regional or multi-cloud resources—deploy a runner closer to where your resources live for faster execution.
By using a self-hosted runner, you gain full control over how workflows run, interact with your infrastructure, and handle data—making it the ideal choice for production and enterprise-grade environments.
You can deploy a Runner in different environments based on your infrastructure and operational needs. Each deployment method comes with its own setup requirements and capabilities. Below are the supported deployment modes, along with notes and limitations to help you choose the option best suited to your needs.
The Docker deployment mode allows the Blink Runner to use the host machine’s Docker socket to manage and execute plugins as isolated containers. This setup enables the Runner to dynamically create and manage plugin sessions on demand, making it ideal for local development or lightweight self-hosted environments.
Supported platforms include:
Linux
macOS
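As an illustration, a typical Docker installation mounts the host’s Docker socket so the runner can create plugin containers on demand. The image name and environment variable below are placeholders, not Blink’s actual values; use the exact command generated in your Blink workspace:

```bash
# Hypothetical image reference and token variable, shown for
# illustration only. Mounting /var/run/docker.sock gives the runner
# access to the host's Docker engine to manage plugin containers.
docker run -d \
  --name blink-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e RUNNER_TOKEN="<your-registration-token>" \
  blinkops/blink-runner:latest
```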
For a more extensive guide on how to install, configure, and deploy a runner using Docker, navigate to the documentation here.
The Kubernetes deployment mode is ideal for production environments that require scalability, isolation, and orchestration. Blink Runner can be deployed as a Helm chart into any compliant Kubernetes cluster, making it easy to integrate with existing infrastructure and CI/CD workflows.
The supported Kubernetes deployment stack includes:
Kubernetes engine - version 1.19 or higher.
Helm - version 3 or higher.
For a more extensive guide on how to install, configure, and deploy a runner using Kubernetes, navigate to the documentation here.
When deploying the Runner in Kubernetes, you need to set an appKey as an input value. This registration token is used to authenticate the runner with Blink.
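For example, the appKey can be supplied with --set at install time. The chart repository URL and chart name below are assumptions for illustration; use the ones from Blink’s installation guide:

```bash
# Repository URL and chart name are hypothetical placeholders.
helm repo add blink https://charts.blinkops.com
helm repo update

# Pass the registration token (appKey) as an input value.
helm install blink-runner blink/blink-runner \
  --namespace blink --create-namespace \
  --set appKey="<your-registration-token>"
```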
If you want to manage this registration token in your Secret Manager, and you use kubernetes-external-secrets in your cluster, you can create an ExternalSecret resource for it before installing the Runner Helm chart. The secret name should be set to blink-runner-secret.
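A sketch of such a resource, assuming the kubernetes-external-secrets project with an AWS Secrets Manager backend; the backend type, secret path, and key name are placeholders for your own setup:

```bash
# Create the ExternalSecret before installing the Runner Helm chart.
# backendType, key, and name below are assumptions -- adjust them to
# your secret manager and to the key the chart expects.
kubectl apply -f - <<'EOF'
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: blink-runner-secret    # the name the Runner chart looks for
spec:
  backendType: secretsManager  # e.g., AWS Secrets Manager
  data:
    - key: blink/runner/app-key  # hypothetical path in your secret manager
      name: appKey               # hypothetical key in the resulting Secret
EOF
```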
A default Kubernetes connection is a custom feature of the Blink runner on the Kubernetes platform. Users who install Blink’s runner on Kubernetes get a default connection to their namespace, which helps them use Blink’s Kubernetes integration. This connection has access to the namespace service account, giving you the ability to control the namespace through Blink.
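To inspect what this default connection can do, you can list the RBAC permissions of the namespace service account with kubectl; the namespace and service-account names here are illustrative:

```bash
# List the permissions granted to a service account in the runner's
# namespace. "blink" and "blink-runner" are illustrative names.
kubectl auth can-i --list \
  --namespace blink \
  --as system:serviceaccount:blink:blink-runner
```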
If you have Calico installed on your cluster, be aware of a known bug in the Calico version used in the Tigera operator manifests for 1.7 and 1.8.