Belnet is a peer-to-peer network. It is limited to the Local Area Network at Beloit College. There are three types of users involved.
Users do not have any special software installed. They can download and upload files on the network using their web browser.
Hosts are autonomous individuals who host their own belnet server (node). Hosts can put anything they want on their node, and even customize its code if they want.
Developers are Hosts who also know the password to the external validation server. They can update the blog, monkey around with the code, and push out auto-updates to all nodes.
(For a technical description of this process, see the Code Overview.)
Instead of putting all the files in one place, belnet aims to distribute them across as many computers, or nodes, as possible. This way, traffic will be spread out, and the network will be both more robust and also less likely to cause alarm bells to ring over at ISR headquarters.
However, this causes a problem. If there is no central server to act as an index, how do users find all of the files that are spread out across the network?
Well, why not make every server an index server? This is what every node in belnet does:
Every 10 minutes, I do the following:

1. Send requests to every peer address in my database.
   * ME: What's my IP? What IPs do you know of?
   * PEER: You are coming from 144.89.blahblah. Here are the contents of the IP list in my database...
2. Store the IP that everyone told me I have. Add any new addresses from the peers' responses to the database.
3. Send requests to every address in this server's database.
   * ME: Hey, this is my IP. Who are you? What files do you have?
   * PEER: I added your IP to my list of IPs. I am Node X. I have the following files...
4. Compile a list of other currently online nodes based on the responses.
5. Compile a list of files and who hosts them based on the responses. Store both in the database.
6. Contact belnet-nodes.net84.net (the external validation & linking server) so that it knows I was online in the last 10 minutes.
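The refresh cycle above can be sketched as follows. This is an illustrative model only, not the actual node code (which is written in PHP); the function names, request keys, and database layout are hypothetical.

```python
# Hypothetical sketch of one belnet refresh cycle. The real node is PHP;
# names like query_peer and the request/response keys are invented here.

def refresh(db, query_peer):
    """Run one discovery pass over every known peer address.

    db         -- dict with 'peers' (set of IPs), 'my_ip', 'nodes', 'files'
    query_peer -- callable(ip, request) -> response dict, or None if offline
    """
    # Pass 1: learn our own IP and any new peer addresses.
    for ip in list(db['peers']):
        resp = query_peer(ip, {'ask': 'whoami-and-peers'})
        if resp is None:
            continue                              # peer is offline
        db['my_ip'] = resp['your_ip']             # "You are coming from ..."
        db['peers'].update(resp['known_peers'])   # merge the peer's IP list

    # Pass 2: announce ourselves, then collect node names and file lists.
    online, files = {}, {}
    for ip in list(db['peers']):
        resp = query_peer(ip, {'ask': 'identify', 'my_ip': db['my_ip']})
        if resp is None:
            continue
        online[ip] = resp['node_name']            # "I am Node X"
        for f in resp['files']:
            files.setdefault(f, []).append(ip)    # file -> hosting nodes

    db['nodes'], db['files'] = online, files      # store both in the database
    return db
```

Because every node runs this same cycle, any node a user visits can serve a reasonably fresh index of the whole network.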
In this way, a list of all other online nodes and online files is maintained. They are compiled into a list of links whenever a client requests the file system html page via their web browser. For a technical description of this process, see the Code Overview.
Node as a build server
Because there is no central location on the Beloit College network where belnet is hosted, every node must be able to offer an installer to a client who wants to become a host by installing their own node. This is why there aren't separate builds for Mac or Windows; no matter what, when installing a node, you need both installers, because you are going to serve them to other people later.
This is what happens when someone requests a build from a node:
The node checks its database to see if it issued a build of this version already. If it has, it simply returns that file. Otherwise, it will do the following:

1. Create a new zip file.
2. Copy from its server root to a folder called common in the zip file. (The server root folder is htdocs; it is where the PHP pages live. On Mac, it is at /Applications/XAMPP/xamppfiles/htdocs. On Windows, it is at C:\xampp\htdocs.) The copy goes like this: htdocs --> zipfile/common/htdocs, WITH EXCEPTIONS: nodePassword.php, the shared folder, the update folder, the builds folder, and the folder where the installers live (htdocs/build).
3. After that, the installers are copied to the zipfile separately: htdocs/build/Mac --> zipfile/Mac and htdocs/build/Windows --> zipfile/Windows.
4. Finally, the build is recorded in the database and the zipfile is sent to the user.
After the user receives the zipfile, the process is reversed on their own computer.
Apache is installed from either the Mac or Windows installer. Then the setup script runs:

1. The htdocs folder in Apache is replaced by the contents of common/htdocs.
2. Empty builds, shared, build, and update folders are created inside htdocs.
3. A new nodePassword.php file is made based on the password the user entered.
4. The Mac and Windows installers are copied to the build folder.
Auto-Update & Authentication
Auto-updates are built on top of the previous two processes. Whenever a node updates its online peer/online file list, it also checks to see whether any other nodes have a higher version number. If they do, then:
1. The node with the lower version number requests an update build from the node with the higher version number.
2. To produce the update build, the node with the higher version number creates a new zip file and copies htdocs --> zipfile/htdocs. This time, the same exceptions apply, but the build folder is included as well, only without the hefty (~130 MB) Apache installers.
3. As the files are being copied, the fingerprint of each file is taken (an md5 hash). When the zipfile is complete, the md5 of all the fingerprints is sent to the validation server. If the validation server is currently accepting new fingerprints, it stores the hash.
4. When the node with the lower version number receives the zipfile, it unzips it and takes the fingerprints of all files in the same manner as the higher node did, eventually calculating one final md5 hash.
5. This final hash is sent to the validation server, and if the hash exists in its database, the validation server sends a "true" response. Only then will the node with the lower version number replace its htdocs folder with the contents of the zip file.