LBSocial

OpenClaw Skills: Guiding AI Agents for Data Analytics

Have you ever asked an AI agent to perform complex data analysis, only to watch it waste computing power on a generic text summary instead of writing actual code?


If you want to move beyond simple chatbots and build a truly autonomous AI assistant, your agent needs more than just a large language model brain. It needs Tools to execute actions and Skills to know exactly how to use those tools.


If you haven't set up your base agent yet, you will want to start with our guide on deploying OpenClaw as a 24/7 AI assistant on Google Cloud. Today, we are giving that agent a massive upgrade.


The Overall System Architecture


To understand how we can enhance our agent, it's crucial to visualize the entire workflow from user request to final output. The OpenClaw platform integrates several services into a seamless automated pipeline.


The overall OpenClaw system architecture diagram: Gemini AI processes user requests from Telegram via Python and sends results to GitHub.

As shown in the diagram, this architecture operates in three distinct phases:


  1. The Input Interface (Telegram): This is where human interaction happens. You send your data analysis tasks, requests, and questions directly through Telegram, making it as easy as texting a colleague.

  2. The Processing Core (OpenClaw on Google Cloud): Hosted on a Google Cloud VM, this is the engine of the operation. The agent uses Gemini AI as its central brain for reasoning. However, to actually do the work, Gemini relies on Python (the execution tools) and structured SOPs (the skills/instructions) to process the task automatically.

  3. The Output & Collaboration (GitHub): Once the data is analyzed and the code is written, the agent doesn't just dump a massive wall of text back into Telegram. Instead, it pushes the final results—such as Jupyter notebooks, raw data, and .py scripts—directly to GitHub via Pull Requests. This allows you to review the code, track issues, and collaborate just like you would with a human teammate.
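The three phases above can be sketched as a single request-handling loop. This is an illustrative Python sketch only, not the real OpenClaw API: every function name here (reason_with_gemini, run_python_tools, open_github_pr) is a hypothetical stand-in for the corresponding service.

```python
# Hypothetical sketch of the OpenClaw pipeline; these function names are
# illustrative stand-ins, not the real OpenClaw or Gemini APIs.

def reason_with_gemini(message: str) -> str:
    """Phase 2a: the LLM turns a chat message into an analysis plan."""
    return f"plan for: {message}"

def run_python_tools(plan: str) -> dict:
    """Phase 2b: Python executes the plan and collects the artifacts."""
    return {"notebook": "analysis.ipynb", "script": "analysis.py"}

def open_github_pr(artifacts: dict) -> str:
    """Phase 3: push the artifacts to GitHub and open a Pull Request."""
    return "https://github.com/example/repo/pull/1"

def handle_request(message: str) -> str:
    """Phase 1 -> 2 -> 3: Telegram message in, PR link back out."""
    plan = reason_with_gemini(message)
    artifacts = run_python_tools(plan)
    pr_url = open_github_pr(artifacts)
    return f"Analysis ready for review: {pr_url}"

print(handle_request("Analyze recent US gas prices"))
```

The key design point is that the LLM never returns raw analysis to the chat; it only returns a link to reviewable work.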


The Problem: Intelligence Without Execution


AI models like Gemini are incredibly smart, but out of the box, they can be unpredictable. Without proper instructions, when asked to analyze recent gas prices following a specific geopolitical event, an unprepared OpenClaw agent defaults to its most basic behavior: it searches the web and summarizes the news as a text briefing. It doesn't write any Python code, crunch any numbers, or generate any visualizations.


It lacks the structure to perform actual data science.


The Telegram chat shows the AI agent returning a text-only briefing about gas prices instead of a data chart.



The Solution: Activating Tools + Skills


To fix this, we need to activate the two core components we highlighted in our architecture diagram earlier: Tools and Skills. Instead of relying solely on the LLM's reasoning to answer the prompt, we must explicitly provide it with a Python environment for code execution and the exact Standard Operating Procedure to follow.


As seen in the center of the architecture diagram, these two elements feed directly into the Gemini AI, giving it both the instructions (the SOP) and the physical capabilities (Python) to complete the task successfully.


Step 1: Equipping the Agent with Python Tools


Because we are using a Google Cloud VM, Python comes preinstalled. However, to install specific data science libraries, our agent needs pip (the Python package manager) and support for virtual environments.


Depending on your specific settings and permissions, your AI agent might be able to install these tools on its own! But if it doesn't have the required root access—as was the case in our video demo—you can easily jump in and help your agent install them.


Simply log into your Google Cloud VM via SSH and run the following commands in your terminal:


1. Install Pip (Package Manager)

sudo apt update && sudo apt install python3-pip -y

2. Install Virtual Environment Support

sudo apt install python3-venv -y

Note: Using a virtual environment (venv) is highly recommended to keep the OpenClaw environment clean and avoid package conflicts.
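With pip and venv support installed, setting up such an environment takes one more command. The path and library choices below are just an example, not an OpenClaw requirement:

```shell
# Create an isolated environment for the agent's data analysis work
python3 -m venv ~/agent-venv

# The agent (or you) can then install analysis libraries into it, e.g.:
#   ~/agent-venv/bin/pip install pandas yfinance matplotlib
# Calling the venv's pip directly keeps installs off the system Python.
```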


Terminal window showing the successful installation of python3-venv.

Once installed, the agent has the physical tools to execute Python code. Because we previously connected our agent to our repositories—which you can review in our OpenClaw GitHub AI teammate tutorial—the agent can now automatically push generated Python files and Jupyter notebooks directly to GitHub via a Pull Request.



Step 2: Building the Data Analytics Skill


Having tools is useless if the agent doesn't know the workflow. We use OpenClaw's built-in "Skill Creator" mode to draft a Markdown-based instruction set (SKILL.md) for our agent.


We instruct the agent that whenever it receives a data analysis request, it must strictly follow these five steps:


  1. Identify the data source.

  2. Download or collect the data.

  3. Use Python to process the data.

  4. Use a Jupyter Notebook to summarize and visualize the data.

  5. Upload the analysis (including .py files and notebooks) to GitHub via a Pull Request.
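The resulting SKILL.md roughly encodes those five steps as plain Markdown instructions. The exact wording is whatever the Skill Creator drafts for you, so treat this as an illustrative sketch rather than the generated file:

```markdown
# Data Analysis Skill

When the user requests a data analysis, follow these steps in order:

1. Identify the data source.
2. Download or collect the data.
3. Use Python to process the data.
4. Use a Jupyter Notebook to summarize and visualize the data.
5. Upload the analysis (.py files and notebooks) to GitHub via a Pull Request.
```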


To review the skill structure that OpenClaw generates, you can navigate your workspace using these commands:


3. Navigate to the Skills Workspace

cd ~/.openclaw/workspace/skills

4. Explore the Specific Skill Folder

cd data-analysis && ls -R

5. Inspect the Skill Instructions (SOP)

cat SKILL.md

Terminal view of the SKILL.md file showing the numbered Markdown instructions.

Step 3: Putting the Skill to Work


Now it is time to use the skill to analyze data. In Telegram, you can explicitly call the skill within your prompt to ensure the agent follows the correct SOP.


For example, we send this exact prompt:


"Use the data analysis skill to analyze the recent gas prices in the United States during the US-Iran airstrike."

Because the skill is active, the agent no longer defaults to a web search. Instead, it follows our 5-step SOP systematically: it identifies an open-source dataset, writes a Python script using pandas and yfinance to download and process the data, creates a Jupyter Notebook to summarize and visualize the statistics, and packages everything up.
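To make the processing step concrete, here is a minimal sketch of the kind of transformation the agent's generated script performs: computing a 3-day moving average over daily closing prices with pandas. The prices below are synthetic stand-ins; the agent's real script fetches actual futures data with yfinance instead.

```python
import pandas as pd

# Synthetic daily closing prices; the agent's real script would pull
# these with yfinance (this DataFrame is just a stand-in).
dates = pd.date_range("2026-02-02", periods=6, freq="B")
df = pd.DataFrame({"Close": [2.10, 2.15, 2.31, 2.28, 2.40, 2.37]},
                  index=dates)

# 3-day moving average, the overlay plotted in the final notebook
df["MA3"] = df["Close"].rolling(window=3).mean()

print(df.round(3))
```

The first two rows of MA3 are NaN because a 3-day window needs three observations before it produces a value.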


A Telegram chat showing the prompt that invokes the skill, followed by the agent's step-by-step technical response.

The Final Output: Actionable Intelligence


The agent's work isn't complete until the analysis is reviewable. Once the agent finishes executing the tools as instructed by our skill, it pushes the formatted project to our repository. Here is the resulting Jupyter Notebook, successfully visualizing the exact data points requested.


The final output visualization from the video: the Jupyter Notebook on GitHub, charting RBOB Gasoline Futures daily closes (Feb-Mar 2026) against a 3-day moving average.


Step 4: An Ongoing Process (Refining Your Skills)


Building an AI agent is an ongoing process. You will rarely write the perfect skill on the first try, and that is completely fine.


After reviewing the output, you might realize you want the agent to handle API keys securely or format its GitHub uploads differently. You can simply invoke the OpenClaw Skill Creator again, tell it you want to update the existing "Data Analysis" skill, and add new constraints in natural language:


"When you download data, ask me if I have API keys and store them securely. Also, when uploading to GitHub, always use a Pull Request."

OpenClaw will automatically update the underlying SKILL.md file with these new rules. By continuously updating your skills, you refine your agent's behavior, save processing tokens, and ensure it follows your exact data analytics workflow every time.
