
The lightning package on PyPI — the high-level PyTorch framework powering ML training pipelines across the globe — was compromised in an active supply chain attack on April 30, 2026. Versions 2.6.2 and 2.6.3 were flagged as malicious by Socket, Aikido Security, OX Security, and StepSecurity. Version 2.6.1, published on January 30, 2026, remains the last known clean baseline.
With hundreds of thousands of daily downloads and millions of monthly installations, lightning is a cornerstone of Python-based AI and machine learning workflows. This wasn’t a hit on some obscure library. This was a precision strike at the beating heart of the AI development ecosystem.
The Attack Architecture: Import-Time Execution, Multi-Stage Payload
The compromise is surgical. The malicious code is injected directly into __init__.py — the file that executes the moment you import the package. A background thread is spawned before any legitimate Lightning code loads, executing start.py, a cross-platform Bun bootstrapper that detects the victim’s OS and architecture, downloads the Bun JavaScript runtime v1.3.13, and launches router_runtime.js — an 11 MB obfuscated JavaScript payload.
The malicious package includes a hidden _runtime directory containing the downloader and the obfuscated JavaScript payload. The use of Bun as a cross-platform execution engine is deliberate — it avoids Python-native detection signatures and bridges the gap between PyPI-delivered malware and JavaScript-native credential harvesting tooling.
What gets stolen:
SSH keys, shell histories (bash, zsh, Python, Node, MySQL, psql), .env files, git credentials, AWS/GCP/Azure credentials, Kubernetes and Helm configs, Docker credentials, npm tokens, MCP configs, and cryptocurrency wallets — all exfiltrated silently in the background while the developer’s training job runs in the foreground.
Exfiltration Mechanics: The Dead-Drop Technique
The C2 infrastructure shows adversarial sophistication. Stolen data is immediately POSTed to an attacker-controlled server over port 443, with the domain and path stored as encrypted strings inside the payload to hinder static analysis. In parallel, the malware polls the GitHub commit search API for commit messages prefixed with EveryBoiWeBuildIsAWormyBoi, which carry double-base64-encoded tokens. Once decoded, an Octokit client is authenticated for further operations. A new public GitHub repository is then created with a randomly chosen Dune-word name and the description “A Mini Shai-Hulud has Appeared,” with stolen credentials committed as timestamped JSON results.
This dead-drop architecture — leveraging GitHub’s own commit search API as a covert channel — is a textbook operational security play. The attacker never needs to expose a C2 domain directly. Every piece of infrastructure is borrowed from trusted platforms.
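The token-recovery step can be illustrated with a short sketch. The EveryBoiWeBuildIsAWormyBoi prefix and the double-base64 scheme come from the analysis above; the token value and function name are invented for the demo.

```python
import base64
from typing import Optional

# Prefix observed in the attacker's dead-drop commit messages.
PREFIX = "EveryBoiWeBuildIsAWormyBoi"

def decode_dead_drop_token(commit_message: str) -> Optional[str]:
    """Recover a double-base64-encoded token from a tagged commit message."""
    if not commit_message.startswith(PREFIX):
        return None
    blob = commit_message[len(PREFIX):].strip()
    # Two rounds of base64 decoding, as described in the analysis.
    return base64.b64decode(base64.b64decode(blob)).decode("utf-8")

# Demo with a fabricated token (never a real credential):
encoded = base64.b64encode(base64.b64encode(b"ghp_exampletoken")).decode()
print(decode_dead_drop_token(PREFIX + " " + encoded))  # -> ghp_exampletoken
```

Polling GitHub's public commit search for this prefix lets the malware fetch fresh credentials without ever contacting attacker-owned infrastructure directly.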
Persistence: First Documented Abuse of Claude Code’s Hook System
Here’s where this incident crosses from “credential theft” into “architectural violation.”
Once inside a repository, the malware plants persistence hooks targeting Claude Code and VS Code. For Claude Code, it writes a SessionStart hook with matcher: "*" into .claude/settings.json, pointing to node .vscode/setup.mjs — which fires every time a developer opens Claude Code in the infected repository, requiring no tool use or user action beyond launching the session. A parallel hook targets VS Code users via a runOn: folderOpen task in .vscode/tasks.json.
This may be among the first documented instances of malware abusing Claude Code’s hook system in a real-world attack.
The implication is significant: as AI coding assistants become embedded in developer workflows, their configuration and hook systems become a new persistence attack surface. The adversary is evolving faster than the tooling security model.
GitHub Account Compromise and Maintainer Suppression
A community member first flagged the compromise in Lightning-AI’s GitHub repository under issue #21689, titled “Possible supply chain attack on version 2.6.3.” That issue was closed without public explanation. When Socket subsequently opened a follow-up warning issue in the pytorch-lightning repository, it was closed within one minute by the pl-ghost account, which then posted a “SILENCE DEVELOPER” meme in the thread — strongly indicating the project’s GitHub account is itself compromised.
The attacker didn’t just own the PyPI release pipeline. They owned the GitHub account, closed the incident disclosure, and mocked the security community doing the right thing. That is not an opportunistic compromise — that is a deliberate, coordinated operation.
TeamPCP Attribution: The Escalating Campaign
This attack is consistent with TeamPCP’s escalating open-source supply chain campaign, which previously compromised LiteLLM (March 24, 2026), Telnyx (March 27, 2026), Bitwarden CLI, and Xinference in rapid succession.
The campaign is assessed to be an extension of the Mini Shai-Hulud supply chain incident that targeted SAP-related npm packages. The group also called LAPSUS$ “a good partner of ours” that “has been involved heavily throughout this entire operation,” and emphasized that it operates CipherForce — its own private locker.
The cross-registry pivot — from npm (Bitwarden, SAP) to PyPI (Lightning) — confirms TeamPCP is not a single-platform threat actor. They are executing a systematic campaign across every major package registry that developer toolchains depend on.
An attacker-posted Tor onion link appeared in the GitHub issue thread pointing to a Team PCP-branded site with a PGP-signed message claiming LAPSUS$ involvement. Socket has not independently verified this attribution and is investigating whether the branding reflects true attribution, opportunistic association, or a deliberate false flag.
The LAPSUS$ claim should be treated with analytical skepticism — false-flag attribution is a well-documented technique in supply chain operations to muddy forensic trails.
Detection: Speed as Defense
Socket’s AI scanner flagged both versions 2.6.2 and 2.6.3 as malicious just 18 minutes after publication on April 30, 2026. PyPI administrators subsequently quarantined the project.
Eighteen minutes. That is the window between publication and detection. For any CI/CD pipeline with automated dependency updates, eighteen minutes is well within the blast radius.
Practitioner Remediation
If your environment has lightning installed, the response is not optional:
Immediate:
- Block and remove versions 2.6.2 and 2.6.3 from all developer machines, CI/CD pipelines, and build environments
- Downgrade to version 2.6.1 — the last confirmed clean baseline
- Treat any system that imported the compromised versions as fully compromised
Credential Rotation:
- Rotate all GitHub tokens, npm tokens, AWS/GCP/Azure credentials, and SSH keys present in affected environments
- Audit .env files, Kubernetes secrets, Docker credentials, and MCP configuration files for unauthorized access
CI/CD Audit:
- Review GitHub repositories for unauthorized commits, particularly looking for results JSON files with base64-encoded content or repos with “A Mini Shai-Hulud has Appeared” in the description
- Audit git log across all repositories accessible from affected build systems
Developer Tooling:
- Inspect .claude/settings.json for unauthorized SessionStart hooks
- Inspect .vscode/tasks.json for unauthorized runOn: folderOpen tasks
- Review .vscode/setup.mjs and .claude/setup.mjs for the Bun bootstrapper pattern
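The developer-tooling checks above can be partially automated. The sketch below assumes the hook layouts described in this report (a SessionStart entry in .claude/settings.json and a runOn: folderOpen task in .vscode/tasks.json); the field names follow Claude Code and VS Code conventions, so verify them against your own files.

```python
import json
from pathlib import Path

def find_persistence_hooks(repo: Path) -> list:
    """Flag the Claude Code and VS Code persistence patterns described above."""
    findings = []

    claude_settings = repo / ".claude" / "settings.json"
    if claude_settings.is_file():
        data = json.loads(claude_settings.read_text())
        # Any SessionStart hook in a repo you did not configure is suspect.
        for entry in data.get("hooks", {}).get("SessionStart", []):
            findings.append(("SessionStart hook", entry))

    vscode_tasks = repo / ".vscode" / "tasks.json"
    if vscode_tasks.is_file():
        data = json.loads(vscode_tasks.read_text())
        for task in data.get("tasks", []):
            if task.get("runOptions", {}).get("runOn") == "folderOpen":
                findings.append(("folderOpen task", task))

    return findings
```

Run it against every checkout reachable from an affected machine and review anything it flags by hand before deleting.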
Systemic Controls:
- Implement dependency pinning with cryptographic hash verification — not just version pinning
- Evaluate software composition analysis tooling capable of behavioral detection, not just version-matching
- Consider import-time monitoring in Python environments for unexpected subprocess spawns during package load
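Hash-pinned installs are the concrete form of the first control: pip supports requirements lines of the form package==2.6.1 --hash=sha256:... installed with --require-hashes, which fails the install if the published artifact ever changes. The snippet below sketches the underlying check; the file and digest are created inside the demo, not taken from any real release.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Compare an artifact's SHA-256 digest against a pinned value."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_sha256

# Demo against a local stand-in for a downloaded wheel:
artifact = Path("demo-package.whl")
artifact.write_bytes(b"not a real wheel, just demo bytes")
pinned = hashlib.sha256(artifact.read_bytes()).hexdigest()

print(verify_artifact(artifact, pinned))    # True: digest matches the pin
print(verify_artifact(artifact, "0" * 64))  # False: tampered or unknown artifact
```

Version pinning alone would not have stopped this attack once 2.6.2 shipped; a hash pin on 2.6.1 would have.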
The Bigger Picture
This incident exposes the fragility baked into the “just pip install it” culture of AI/ML development. The lightning package isn’t a small utility — it is infrastructure for how models get trained and deployed. Compromising it is not compromising a tool. It is compromising the pipeline that produces AI systems.
The attack surface of modern AI development now includes: the frameworks used to train models, the IDEs and coding assistants used to write training code, the CI/CD systems that run training jobs, and the cloud credentials that host the trained artifacts. TeamPCP’s Mini Shai-Hulud campaign is not stealing credentials. It is systematically mapping and monetizing the entire AI development supply chain.
The security model for open-source AI infrastructure needs to mature — rapidly — to match the adversarial sophistication now targeting it.


