Hi, I have been developing a distributed solution around oclHashcat, and I am here to hear your thoughts on this, perhaps request features, and to find out if there is actually someone out there with the same picture in mind.
My solution is a centralized one with a web server written in PHP+MySQL (which might be swapped for SQLite). The server keeps the computing tasks, the files related to those tasks (rules, wordlists, ...), and the hashlists for those tasks. It also keeps a list of computing agents.
The computing agent is aimed at BFUs with powerful GPUs, a Windows system (partly because my programming knowledge is limited to C# and PHP), and very little IT knowledge. In other words, gamers who are willing to contribute their computing power but are not able to operate on the command line and such.
The operator can generate an agent deployer, which contains a hardcoded address of his/her server. Once the deployer is started on the target machine, it detects the platform (32/64-bit and ATI/NVIDIA) and downloads the oclHashcat build specific to that platform (to avoid unnecessary traffic). It then checks the server for its assignments.
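The platform-to-download mapping could look something like the following sketch (in Python for illustration; the real agent is C#). The package file names are purely hypothetical placeholders, not the actual archive names the server would host:

```python
# Hypothetical mapping from detected platform to the package the
# deployer downloads; the file names here are illustrative only.
PACKAGES = {
    ("32", "ati"):    "oclHashcat32.7z",
    ("64", "ati"):    "oclHashcat64.7z",
    ("32", "nvidia"): "cudaHashcat32.7z",
    ("64", "nvidia"): "cudaHashcat64.7z",
}

def pick_package(bitness, vendor):
    """Return the package an agent should fetch for its platform."""
    key = (bitness, vendor.lower())
    if key not in PACKAGES:
        raise ValueError("unsupported platform: %s/%s" % key)
    return PACKAGES[key]
```

This keeps each agent's download limited to the single build it can actually run.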
If the operator assigns an agent to a specific task, the agent downloads all files and the hashlist related to this task and then requests a chunk of work. If it is the first chunk for this task, the server orders the agent to run a benchmark for this specific task. The result is then extrapolated, so each agent is given a chunk worth exactly 5 minutes (server-side configurable) of computation.
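The extrapolation itself is just a multiplication; a minimal sketch (function name and the keys-per-second unit are my assumptions, not the actual server code):

```python
def chunk_size(benchmark_speed, target_seconds=300):
    """Extrapolate a benchmarked speed (keys/second) into a keyspace
    chunk that should take roughly target_seconds to exhaust.
    300 s is the 5-minute default mentioned above (server-configurable)."""
    return int(benchmark_speed * target_seconds)
```

So an agent benchmarked at 1,000,000 keys/s would be handed chunks of 300,000,000 candidates.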
The assignment of an agent to a task carries a certain level of aggressivity as an integer. This number essentially specifies how many seconds the agent's computer needs to be inactive before it starts cracking, where 0 = crack all the time.
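The idle gate reduces to a single comparison; a sketch of the check the agent would run (the function name is hypothetical):

```python
def may_crack(idle_seconds, aggressivity):
    """An agent may crack once the machine has been idle for at least
    `aggressivity` seconds; aggressivity 0 means crack all the time."""
    return idle_seconds >= aggressivity
```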
The results (i.e. cracked hashes) are transferred in real time as they are being cracked. After each completed chunk, the agent's benchmark for this task is adjusted to keep the chunk duration more or less constant. If a chunk has not been marked as completed within more than 3 times (configurable) its expected duration, it is reassigned to the first agent that requests a new chunk after that timeout.
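The per-chunk adjustment could be done by blending the old speed estimate with the speed actually observed on the finished chunk. This is only one plausible sketch; the smoothing factor is my assumption (the server might equally just replace the old value outright):

```python
def adjust_speed(old_speed, chunk_keyspace, actual_seconds, smoothing=0.5):
    """Re-estimate an agent's speed after a completed chunk so the next
    chunk again lands near the target duration. `smoothing` controls how
    much weight the freshly observed speed gets versus the old estimate."""
    observed = chunk_keyspace / actual_seconds
    return old_speed * (1 - smoothing) + observed * smoothing
```

An agent that finished its chunk faster than predicted will thus receive a proportionally larger chunk next time, and vice versa.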
When submitting cracked hashes to the server, the agent receives in response a list of hashes that were cracked by other agents in the meantime, so they can be removed from the local hashlist. You may be familiar with the term "zap" for this action, and my solution implements it. A zap takes effect in the next chunk (so mostly within 5 minutes), until I find a way to instruct a running oclHashcat to quit and restore, thus reloading the zap-stripped hashlist.
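Applying a zap on the agent side amounts to filtering the local hashlist before the next chunk starts; a minimal sketch (names are illustrative):

```python
def apply_zap(local_hashlist, zapped):
    """Drop hashes already cracked by other agents from the local
    hashlist, so this agent stops wasting GPU time on them."""
    zapped = set(zapped)  # set lookup keeps the filter O(n)
    return [h for h in local_hashlist if h not in zapped]
```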
For now the agent runs as a Windows console application, with the further aim of running it as a Windows service, to make it possible to crack even before a user logs in.
It was pointed out to me that with so much control on the server side and automation on the agent side, my solution pretty much resembles a botnet.
I would like to hear your thoughts on this as well as feature ideas.
I will be publishing the server and agent source code. Agent binaries will be generated directly from the server web GUI. I would also be glad if someone liked the idea enough to port the agent to Unix-based systems, as this is beyond my programming skills.