5 Deadly Python Patterns for Agent Skills (Security Scan)
The "Convenience" Trap
When building Agent Skills or MCP Servers, developers often prioritize speed. "I just need the agent to run this script," you think. So you reach for the standard Python toolkit: os.system, eval, or subprocess with shell=True.
In a traditional local script, these are fine. But inside an AI Agent Skill that accepts unpredictable input from an LLM? They are catastrophic.
At AgentSkillsHub.dev, we have scanned hundreds of skills. Here are the top 5 patterns that immediately trigger a Grade F rating.
1. The RCE Nightmare: eval() and exec()
The Pattern:
# DON'T DO THIS
user_math = "2 + 2"  # in practice this string comes from the LLM, not a trusted literal
result = eval(user_math)  # executes whatever Python the string contains
The Risk:
LLMs are excellent at math; you don't need Python's eval() for that. If an attacker (or a hallucinating model) passes __import__('os').system('rm -rf /') into your skill, your agent executes it with full permissions.
The Fix: Use ast.literal_eval() for simple literal values, or a dedicated math-expression parser. Never execute raw strings as code.
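If the skill only needs to accept literal values from the agent (numbers, strings, lists, dicts), the standard library's ast.literal_eval() is enough. A minimal sketch, with parse_agent_value as a hypothetical helper name:

import ast

def parse_agent_value(raw: str):
    """Parse a literal (number, string, list, dict, ...) from untrusted agent input."""
    try:
        # literal_eval accepts only Python literals; calls, attribute access
        # and imports raise ValueError or SyntaxError instead of executing.
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        raise ValueError(f"Refusing to evaluate non-literal input: {raw!r}")

print(parse_agent_value("[1, 2, 3]"))  # [1, 2, 3]
# parse_agent_value("__import__('os').system('id')")  # raises ValueError

Note that literal_eval() parses literals only; for actual arithmetic, use a dedicated expression parser rather than anything built on eval().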
2. The Shell Injection: os.system()
The Pattern:
# DON'T DO THIS
import os
filename = input_from_agent  # untrusted string from the LLM
os.system(f"cat {filename}")  # a shell parses this line, including ";", "|" and "$()"
The Risk:
This is classic Command Injection. If the agent sets filename to "file.txt; cat /etc/passwd", you just leaked your system users. Our scanner treats os.system as a critical vulnerability because it spawns a shell that interprets special characters.
The Fix: Use subprocess.run() with shell=False and pass arguments as a list.
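A minimal sketch of the safer call, mirroring the cat example above (the filename value stands in for whatever the agent supplies):

import subprocess

filename = "notes.txt"  # in practice, the untrusted string from the agent
# Arguments are passed as a list, so no shell ever parses ";", "|" or "$()".
result = subprocess.run(
    ["cat", filename],
    capture_output=True,
    text=True,
    shell=False,  # the default; spelled out here for emphasis
)
print(result.stdout)

Better still, skip the subprocess entirely when the standard library already does the job; reading a file needs nothing more than open().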
3. The Silent Exfiltrator: Hardcoded Secrets
The Pattern:
api_key = "sk-12345abcdef..."
The Risk:
It seems obvious, but 15% of the skills we scan contain hardcoded API keys. When you publish this to GitHub, scraper bots find it in seconds. For Agent Skills it is worse, because the agent may echo the key back to the user when asked "What is your configuration?"
The Fix: Always use environment variables (os.environ.get).
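A minimal sketch, assuming your deployment injects the secret as an environment variable (MY_SERVICE_API_KEY is an illustrative name):

import os

api_key = os.environ.get("MY_SERVICE_API_KEY")
if not api_key:
    # Fail fast with a clear error instead of shipping a fallback key.
    raise RuntimeError("MY_SERVICE_API_KEY is not set")

Keep the real value in the deployment environment or an untracked .env file, never in the repository.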
4. The Dependency Bomb: Malicious requirements.txt
The Pattern:
Installing requsests (a single-letter typo) instead of requests.
The Risk:
Typosquatting is rampant in the AI engineering world. Malicious packages often pose as popular tools. If your skill auto-installs dependencies, you might be pulling in a backdoor.
The Fix: Pin exact versions in your requirements.txt and use a lock file.
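For illustration, a pinned requirements.txt looks like this (version numbers are examples, not recommendations). Lock-file tools such as pip-tools or Poetry, or pip's --require-hashes mode, go one step further by also pinning package hashes:

# requirements.txt -- exact versions, no loose ranges
requests==2.31.0
httpx==0.27.0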
5. The Blind Trust: Unbounded File Access
The Pattern:
# DON'T DO THIS
with open(user_path, 'r') as f: ...  # user_path comes straight from the agent
The Risk:
Path Traversal. If the agent provides ../../../../etc/shadow, your skill reads it. Agents usually run with the permissions of the user who started them.
The Fix: Validate that the path is within a specific "sandbox" directory before opening it.
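A minimal sketch of that validation, assuming reads are restricted to a single sandbox directory (SANDBOX_DIR and safe_read are illustrative names):

from pathlib import Path

SANDBOX_DIR = Path("/srv/skill-data").resolve()  # illustrative sandbox root

def safe_read(user_path: str) -> str:
    # Resolve symlinks and "../" segments before comparing against the sandbox.
    candidate = (SANDBOX_DIR / user_path).resolve()
    if not candidate.is_relative_to(SANDBOX_DIR):  # Python 3.9+
        raise PermissionError(f"Path escapes the sandbox: {user_path!r}")
    return candidate.read_text()

Because the check runs on the resolved path, both "../" tricks and absolute paths like /etc/shadow are rejected.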
Final Verdict
Security isn't just about preventing hacks; it's about Trust. If you want your Agent Skill to be installed by enterprise users, it needs to be clean.
Check your skill now with our free tool: One-Click Security Scanner