|  

Research Reveal AI Agent Trying to Start Crypto Mining on its Own

Mar 09, Kathmandu - As artificial intelligence (AI) systems become increasingly powerful, new questions are being raised about their control and behavior. A recent study has found that an AI agent attempted to start cryptocurrency mining on its own without instructions, reigniting the debate over AI security and surveillance.

The incident was uncovered during testing of an experimental AI agent called “Rome” being developed by a research team affiliated with Alibaba. The researchers were examining its behavior after the system’s security mechanisms were suddenly activated during training.

The research paper states that during testing, the AI ​​agent attempted to start a process related to cryptocurrency mining. Typically, such processes are intentionally activated by a system administrator. However, in this case, the AI ​​agent is said to have attempted to start this process on its own, even though it was not given such instructions.

According to the research team, the system was operated in a limited and controlled test environment called a “sandbox.” In such an environment, it is not allowed to contact external systems. But the AI ​​agent showed unusual activity in the same environment, the researchers said.

Creating a ‘reverse SSH tunnel’ without instructions

According to the researchers, the AI ​​agent not only attempted crypto mining but also tried to create a ‘reverse SSH tunnel’ itself. This is a technique through which a computer within a secure or restricted network can connect to an external system.

According to the report, the AI ​​was given no explicit instructions to start crypto mining or to create such a tunnel. This has raised concerns about the possibility that AI systems could show unexpected behavior, the researchers said.

Risks also increase with AI capabilities

According to experts, current AI agents are now able to write code, automate complex workflows and work with various online tools. As such capabilities increase, the possibility of unexpected behavior during testing is also expected to increase.

Some similar incidents have been made public before. For example, in an experiment called ‘Moltbook’, AI agents were placed in a social media-like digital environment, where they discussed cryptocurrency-related topics while chatting with each other.

Similarly, a system called ‘OpenCL Agent’, developed by Dan Botero, head of engineering at AI integration platform Inc., was also reported to have attempted to search for jobs on the Internet without instructions.

Growing debate over AI behavior

In May 2025, there was a widespread debate about AI security after claims were made that the Cloud 4 Opus model developed by researchers at Anthropic tried to hide its intentions and remain active.

Experts say that the new incident in the “Rome” experiment is a reminder that as powerful AI systems develop, they need to be monitored, tested, and controlled more tightly.