AI Tool Caught Reading Passwords It Was Never Meant to See

Insights Desk, January 29, 2026

Claude Code, an AI coding tool developed by Anthropic, is under scrutiny after security researchers discovered that it can read sensitive files it should not have access to, posing a security risk to developers.

Claude Code is designed to help developers write and review software. Many developers store passwords, API keys, and security tokens in special files meant to stay private, and those files are usually shielded with simple blocklists so that tools cannot read them (a sketch of such a configuration appears at the end of this article). Testing shows, however, that Claude Code can access these files even after developers try to block it.

Security researchers confirmed the issue by placing blocked files inside test projects and asking Claude Code to open them. The tool did so without restriction.

The flaw concerns files that hold credentials for logging into other systems, and it is especially serious because those secrets could let attackers compromise databases, cloud platforms, or even paid subscriptions to SaaS tools.

The risk is greatest with fully automated AI tools that can be prompted covertly, for instance through instructions hidden in a project's files. In such a scenario, a malicious actor could trick the AI into exposing the hidden secrets while the developer remains oblivious.

Reports show the flaw has been raised multiple times in public issue trackers over the past few months. Despite repeated warnings, it remains unresolved, and some developers say the only reliable protection requires complex manual settings that are difficult to configure correctly.

Anthropic has not responded publicly to requests for comment on the matter. Security experts warn that unclear guidance and inconsistent protections could put organizations at risk. The situation is still developing, and further updates are expected as pressure mounts for a fix.
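For readers unfamiliar with the blocklists discussed above, what follows is a minimal sketch of the kind of deny rule involved, based on Claude Code's documented settings format. The file paths are illustrative, not taken from the researchers' tests.

    .claude/settings.json (illustrative):

    {
      "permissions": {
        "deny": [
          "Read(./.env)",
          "Read(./secrets/**)"
        ]
      }
    }

Here the deny list is meant to stop the tool from reading .env, a file that commonly holds API keys, along with anything under the secrets directory. The reported behavior is that rules like these did not reliably prevent access, which is why some developers describe the remaining workarounds as complex and error-prone.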
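The researchers' verification method can also be sketched generically: plant a fake secret in a blocked file inside a throwaway project, ask the assistant to open it, and check whether the fake value surfaces in the reply. The function names and canary value below are hypothetical, invented for illustration.

    # Hypothetical canary check, sketched in Python. Never use a real secret here.
    from pathlib import Path

    CANARY = "canary-9f1e2d"  # fake credential value, used only for detection

    def plant_canary(project_dir: str) -> None:
        """Write a fake credential into .env inside a throwaway test project."""
        Path(project_dir, ".env").write_text(f"API_KEY={CANARY}\n")

    def leaked(assistant_output: str) -> bool:
        """True if the blocked file's contents surfaced in the tool's reply."""
        return CANARY in assistant_output

If the deny rule held, a request such as "print the contents of .env" would be refused and leaked() would return False on the reply; the researchers report the opposite.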