Goodbye Ansible? Experimenting with GPT-5.4 for DevOps Automation

Recently I’ve been testing some fascinating capabilities of GPT-5.4. One feature that particularly caught my attention is the ability to invoke system tools directly from a prompt—for example, executing commands in a shell environment.
Naturally, I wanted to see how far this could go. So I tried something slightly more ambitious: connecting to a remote server via SSH and performing real operations there.
And the result?
It works.
Running Shell Commands from a Prompt
The basic idea is simple. You instruct the model that it is allowed to execute shell commands via a dedicated tool. Then you describe the task you want to accomplish.
For example, checking what databases exist on a remote PostgreSQL server.
System Prompt
You can run shell commands using the shell tool.
Keep responses concise and include command output when helpful.
User Prompt
PostgreSQL is installed on the server root@xxxxx.io.
Check the available databases there.
Use the PostgreSQL account:
user: postgres
The password is stored in the environment variable POSTGRES_PASSWORD.
For remote access use SSH.
The location of the private SSH key is stored in the environment variable PRIVATE_KEY_PATH.
The password for this SSH key is stored in the environment variable PRIVATE_KEY_PASSWORD.
With this configuration, the model can:
- Establish an SSH connection.
- Authenticate using the provided key.
- Run psql remotely.
- List the available databases.
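Concretely, the steps above collapse into a short SSH invocation. Here is a sketch of the kind of command the model might generate — not taken from an actual session. It assumes the decrypted key is already loaded into an ssh-agent (e.g. via `ssh-add`, supplying `$PRIVATE_KEY_PASSWORD` as the passphrase) and that `POSTGRES_PASSWORD` is set in the remote environment; both are assumptions on my part.

```shell
# Hypothetical sketch of the command sequence behind the prompt above.
# Assumes: key passphrase already handled by ssh-agent, and
# POSTGRES_PASSWORD available on the remote host.
list_remote_databases() {
  local host="$1"   # e.g. root@xxxxx.io, as in the prompt
  # psql -l lists all databases; PGPASSWORD lets psql authenticate
  # non-interactively on the remote side
  ssh -i "$PRIVATE_KEY_PATH" "$host" \
    'PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres -l'
}
```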
In other words, the model effectively becomes a remote operations assistant.
So… Goodbye Ansible, Puppet and Friends?
This experiment made me think about the traditional infrastructure automation stack.
Tools like Ansible, Puppet, Chef, and others are powerful—but they also come with:
- their own DSLs and syntax
- version compatibility issues
- large configuration trees
- dozens (or hundreds) of YAML files
In comparison, describing the desired end state in plain language feels refreshingly simple.
Instead of writing playbooks, I can define a system prompt that turns the model into a DevOps automation agent.
Example: DevOps Automation Agent
System Prompt
You are a senior DevOps automation agent.
You can run shell commands using the shell tool.
Your task is to configure and deploy applications on remote Linux servers using SSH.
Rules:
1. Always connect using SSH when interacting with remote servers.
2. Use the private key located at $PRIVATE_KEY_PATH.
3. The SSH key password is stored in $PRIVATE_KEY_PASSWORD.
4. If a command requires a password, retrieve it from environment variables.
5. Always check system state before making changes (idempotent behavior).
6. Prefer safe automation patterns used in tools like Ansible or Terraform.
7. When installing packages, detect the OS first.
8. Output important command results.
9. If something fails, diagnose and retry with an alternative approach.
Typical workflow:
1. Connect via SSH
2. Detect OS
3. Install dependencies
4. Configure directories and users
5. Deploy application container
6. Configure system services
7. Validate deployment
Keep explanations minimal. Focus on executing commands and showing results.
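Rules 5 and 7 — check state before changing it, and detect the OS before installing — are worth making concrete. A minimal sketch of what the agent's generated commands might look like for "ensure Docker is installed"; the package names are the common distro defaults, everything else is illustrative:

```shell
# Sketch of idempotent, OS-aware installation (rules 5 and 7 above).
ensure_docker() {
  # Rule 5: check current state first; do nothing if already satisfied
  if command -v docker >/dev/null 2>&1; then
    echo "docker already installed"
    return 0
  fi
  # Rule 7: detect the OS; /etc/os-release exists on mainstream distros
  . /etc/os-release
  case "$ID" in
    debian|ubuntu)       apt-get update && apt-get install -y docker.io ;;
    fedora|centos|rhel)  dnf install -y docker ;;
    *) echo "unsupported distro: $ID" >&2; return 1 ;;
  esac
}
```

The same check-then-act shape is exactly what Ansible modules do internally — the difference is that here it is emitted on the fly rather than encoded in a playbook.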
Example Task: Deploy a Containerized Application
User Prompt
Deploy a containerized application to the remote server.
Remote server:
root@server.example.com
Connection rules:
Use SSH with the private key at $PRIVATE_KEY_PATH.
Password for the key is in $PRIVATE_KEY_PASSWORD.
Tasks:
1. Detect the OS and package manager.
2. Ensure Docker is installed.
3. Ensure Docker service is running.
4. Create directory /opt/myapp.
5. Create file /opt/myapp/.env with the following content:
APP_PORT=8080
LOG_LEVEL=info
ENVIRONMENT=production
6. Pull the Docker image:
ghcr.io/mycompany/myapp:latest
7. Stop and remove any existing container named "myapp".
8. Run the container:
docker run -d \
--name myapp \
-p 8080:8080 \
--env-file /opt/myapp/.env \
--restart unless-stopped \
ghcr.io/mycompany/myapp:latest
9. Verify the container is running.
10. Show docker ps and the last 20 container logs.
Use SSH for all remote commands.
Show command outputs.
If Docker is already installed, skip installation.
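For reference, steps 4 through 10 of the task boil down to a command sequence like the following. This is my own sketch of what the agent would run on the remote host over SSH, not a transcript; the image name, paths, and `.env` contents are taken from the task description above.

```shell
# Hypothetical remote-side script for steps 4-10 of the deployment task.
deploy_myapp() {
  # Step 4: application directory
  mkdir -p /opt/myapp

  # Step 5: environment file, exactly as specified in the task
  cat > /opt/myapp/.env <<'EOF'
APP_PORT=8080
LOG_LEVEL=info
ENVIRONMENT=production
EOF

  # Step 6: pull the image
  docker pull ghcr.io/mycompany/myapp:latest

  # Step 7: 'rm -f' also stops a running container; '|| true' keeps the
  # first run idempotent when no container named "myapp" exists yet
  docker rm -f myapp 2>/dev/null || true

  # Step 8: run the container as given in the task
  docker run -d \
    --name myapp \
    -p 8080:8080 \
    --env-file /opt/myapp/.env \
    --restart unless-stopped \
    ghcr.io/mycompany/myapp:latest

  # Steps 9-10: verify and show recent logs
  docker ps --filter name=myapp
  docker logs --tail 20 myapp
}
```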
Why This Feels Different
The interesting part is not just that this works.
It’s how simple the interface becomes.
No playbooks.
No special DSL.
No dependency trees.
Just:
- a system prompt defining the operational rules
- a task description in natural language
That’s it.
Clear. Understandable. Easy to modify.
Final Thoughts
We may not be replacing Ansible or Terraform tomorrow—but this direction is extremely interesting.
LLMs are starting to behave less like text generators and more like operational agents capable of interacting with real infrastructure.
And when that happens, infrastructure automation may start looking very different from the YAML-heavy world we’re used to today.