Deploying a Self-Hosted Runner
A Self-Hosted Runner in Blink is a lightweight agent that you deploy within your own environment—whether it’s on-premises, in a private cloud, or inside a secure VPC. It extends Blink’s automation capabilities to systems that aren’t publicly accessible, allowing workflows to securely interact with internal services, databases, secret managers, and cloud resources behind firewalls.
Prerequisites
- Container Engine: Runners in the Blink Platform must be deployed in an environment with a container engine. Examples of supported container engines are Docker and Kubernetes.
- Blink User: You must have either the Owner role, the Contributor role, or a custom role that includes both the runners:view and runners:edit permissions. To learn more about role-based access and permissions, see the User Roles documentation.
Network Requirements
Communication with Blink’s SaaS Server: Blink’s runner must be able to communicate with Blink’s SaaS server over port 443 (HTTPS).
Access to Blink’s Services: The runner’s network configuration should allow access to the following Blink services:
- US (AWS) - https://app.blinkops.com/
- EU (AWS) - https://ue1.blinkops.com/
- US (Azure) - https://us2.blinkops.com/
Resource Access: Depending on the use case, Blink’s runner can be integrated with various cloud services. Make sure to allow network access to any internal or external services the runner needs to interact with.
Docker Hub Communication: Since Docker images are downloaded from Docker Hub, ensure that the runner has access to Docker Hub for successful image pulls.
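To verify these requirements before installing the runner, you can run a quick outbound check like the sketch below from the prospective runner host. Only the Blink endpoint for your tenant’s region is strictly required; the Docker Hub registry URL is included to confirm that image pulls will work.

```bash
# Sketch: verify outbound HTTPS (port 443) from the prospective runner host.
# Only the Blink endpoint for your tenant's region is strictly required.
for url in \
  https://app.blinkops.com/ \
  https://ue1.blinkops.com/ \
  https://us2.blinkops.com/ \
  https://registry-1.docker.io/v2/; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --connect-timeout 5 "$url")
  echo "${url} -> HTTP ${code}"
done
```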
Hardware Recommendations
It is recommended to have at least:
- 4 GB of RAM for the runner environment.
- 40 GB of disk space for the runner environment, including runner and plugin images.
- Two vCPUs for the runner environment.
Self-Hosted Runner Deployment Options
Tip: Use a Self-Hosted Runner for More Control and Security
While Blink’s default cloud-hosted runner works well for many use cases, we strongly recommend deploying a Self-Hosted Runner when you need more control, flexibility, or integration with private infrastructure.
Self-Hosted Runners offer key advantages:
- Greater Flexibility – Fully customize the architecture, configuration, and execution environment to match your specific needs.
- Scalability – Allocate and scale resources based on your workload, without limitations from shared cloud infrastructure.
- Workload Optimization – Distribute tasks across your infrastructure to reduce latency and improve performance.
- Enhanced Security & Compliance – Keep sensitive data, secrets, and internal systems within your own environment to meet compliance and security requirements.
Common scenarios where a Self-Hosted Runner is recommended:
- Running actions in a private Kubernetes cluster with no public API access—deploy the runner inside the cluster to run commands locally.
- Accessing resources in a private subnetwork—place the runner in a public subnet that has routing access to internal systems.
- Minimizing data transfer costs—deploy the runner in a subnetwork that uses private endpoints (e.g., to an S3 bucket).
- Storing secrets in a self-hosted vault instead of Blink’s default connections vault.
- Storing data in your own object storage instead of Blink’s default object storage.
- Reducing latency to regional or multi-cloud resources—deploy a runner closer to where your resources live for faster execution.
By using a self-hosted runner, you gain full control over how workflows run, interact with your infrastructure, and handle data—making it the ideal choice for production and enterprise-grade environments.
You can deploy a Runner in different environments based on your infrastructure and operational needs. Each deployment method comes with its own setup requirements and capabilities. Below are the supported deployment modes, along with notes and limitations to help you choose the option best suited to your needs.
1. Docker
The Docker deployment mode allows the Blink Runner to use the host machine’s Docker socket to manage and execute plugins as isolated containers. This setup enables the Runner to dynamically create and manage plugin sessions on demand, making it ideal for local development or lightweight self-hosted environments.
Supported platforms include:
- Linux
- macOS
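As an illustration of what a Docker-based deployment looks like, the sketch below runs the runner with the host’s Docker socket mounted so it can create plugin containers. The image name, tag, and environment-variable name are placeholders; use the exact command and registration token shown in the Blink platform when you add a new runner.

```bash
# Illustrative sketch only: the image name, tag, and environment-variable
# name below are placeholders; copy the exact command and registration token
# shown in the Blink platform when adding a new runner.
docker run -d \
  --name blink-runner \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e APP_KEY="<registration-token-from-blink>" \
  blinkops/runner:latest
# Mounting the Docker socket is what lets the runner create and manage
# plugin containers on the host, as described above.
```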
2. Kubernetes
The Kubernetes deployment mode is ideal for production environments that require scalability, isolation, and orchestration. Blink Runner can be deployed as a Helm chart into any compliant Kubernetes cluster, making it easy to integrate with existing infrastructure and CI/CD workflows.
The supported Kubernetes deployment stack includes:
- Kubernetes engine - version 1.19 or higher.
- Helm - version 3 or higher.
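A Helm-based installation typically looks like the sketch below. The repository URL, chart name, and namespace are placeholders; the appKey value is the registration token described under Important Information below.

```bash
# Illustrative sketch only: the repository URL, chart name, and namespace are
# placeholders; use the Helm repository and values documented in the Blink platform.
helm repo add blink https://charts.example.com/blink   # placeholder repository URL
helm repo update
helm install blink-runner blink/runner \
  --namespace blink --create-namespace \
  --set appKey="<registration-token-from-blink>"
```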
Important Information
- When deploying the Runner in Kubernetes, you need to set an appKey as an input value. This registration token is used as authentication for Blink.
- If you want to manage this registration token in your own Secret Manager and you use kubernetes-external-secrets in your cluster, you can create an ExternalSecret resource for it before installing the Runner Helm chart, as shown in the sketch after this list. The secret name must be set to blink-runner-secret.
- A default Kubernetes connection is a custom feature of the Blink Runner on the Kubernetes platform. Users who install Blink’s Runner on Kubernetes get a default connection to their namespace, which helps them use Blink’s Kubernetes integration. This connection has access to the namespace service account, giving it the ability to control the namespace through Blink.
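For the ExternalSecret option above, a minimal sketch might look like the following, assuming the kubernetes-external-secrets operator with an AWS Secrets Manager backend. The apiVersion, backend type, and key path depend on your operator version and secret manager; only the secret name blink-runner-secret is fixed by Blink.

```bash
# Minimal sketch, assuming kubernetes-external-secrets with an AWS Secrets
# Manager backend. Adjust apiVersion, backendType, and key to your setup;
# only the resulting secret name (blink-runner-secret) is required by Blink.
kubectl apply -n blink -f - <<'EOF'
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: blink-runner-secret
spec:
  backendType: secretsManager        # hypothetical backend; use your own
  data:
    - key: blink/runner/app-key      # hypothetical path in your secret manager
      name: appKey                   # hypothetical key name inside the generated Secret
EOF
```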
Known Limitations
If you have Calico installed on your cluster, there is a known bug in the Calico version used in the Tigera operator manifest for 1.7 and 1.8.
3. EC2 - CloudFormation
The EC2 deployment mode enables you to launch a Blink Runner on an AWS EC2 instance using a pre-configured CloudFormation template.
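As a rough illustration, launching the stack with the AWS CLI might look like the sketch below. The template URL and parameter name are placeholders; use the CloudFormation template and parameters provided in the Blink platform when adding an EC2-based runner.

```bash
# Illustrative sketch only: the template URL and parameter name are placeholders;
# use the template and parameters provided in the Blink platform.
aws cloudformation create-stack \
  --stack-name blink-runner \
  --template-url "https://<bucket>.s3.amazonaws.com/<blink-runner-template>.yaml" \
  --parameters ParameterKey=AppKey,ParameterValue="<registration-token-from-blink>" \
  --capabilities CAPABILITY_NAMED_IAM
```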
Related Articles
Deploying a Runner with Kubernetes
Guide to installing, configuring and deploying a Runner using Kubernetes
Deploying a Runner with CloudFormation
Guide to installing, configuring and deploying a Runner using CloudFormation
Deploying a Runner with Docker
Guide to installing, configuring and deploying a Runner using Docker
Configuring a Runner Group
Deploy multiple on-prem Runners for high availability, parallel execution, or workload isolation.