A white paper outlining a reasonable architecture for the Computing and Networking Infrastructure at South Pole was written by former ASA employee Marty Lyons. Marty requested input from the science groups present at South Pole during mid-December 1991, and the resulting paper reflects their recommendations.
At the science meeting hosted by the Johns Hopkins Applied Physics Lab last month, it was decided to send a copy of this report to all science users of the South Pole Station. Please send all comments to me (preferably by email) by May 15, and I will include them in a final copy to be sent to NSF as a recommendation from the science community.
Bob Loewenstein
Dir., Computing & Communications
Center for Astrophysical Research in Antarctica
Yerkes Observatory
Williams Bay, Wisconsin 53191
Networking
----------
The South Pole network needs to be segmented, both for fault isolation and to allow for future expansion. The network infrastructure buildout will lay the foundation for the next several years of added computing resources and services.
Jane Ozga has spoken to SynOptics and cisco, and received information from them on a SynOptics hub that supports slide-in cisco router cards, as well as a Farallon card to drive an AppleTalk network. That chassis would form the foundation for the new South Pole network.
What we would like to see is:
Most places within the dome can utilize existing phone circuits to build a Macintosh LocalTalk network as required. Remote sites where we will be installing fiber should use that fiber as the primary medium for as many types of communications as possible (data, voice, video, etc.).
The computer center, located within the dome, will be the termination point for all individual network segments. What we would like to see in the computer center rack is another SynOptics hub with:
Hardware
--------
Beginning with an informal survey I took at the Washington, D.C. pre-deployment conference, the wishes of the science community have directed this design toward a more distributed computing environment. It is clear, especially here at Pole, that the needs of the science community are not being served by the centralized model of computing. Rather, teams are coming from university and research environments which make available, and stress, a hierarchical model of computing, which gives the end user much greater flexibility.
The expansion of the Pole computing environment should progress in two distinct, albeit complementary, directions: first, expansion of the network to include distributed computing platforms, and second, expansion of the existing VAX/PC environments to handle the increased load of the expanding network.
In terms of distributed systems, the science groups have made it clear that the systems predominant at their home institutions are Unix workstations and Macintoshes; in particular, Sun workstations, high-end Macs, and PCs.
Towards this end, I believe South Pole should procure the following hardware to give us a broader and more powerful computing base:
I recommend a high-end Sun server, such as a 4/470 or 4/490, with a minimum of 64 Megabytes of memory and a minimum of 6 Gigabytes of disk space. The large memory requirement is to allow the system to drive lower-end workstations or X terminals remotely, as well as run large compute-intensive tasks such as mathematics, data reduction, and visualization. This system would be the real workhorse for any large CPU- or disk-constrained job. With most workstations at Pole running X, I envision this as the one machine which will be doing a lot of the number crunching and file transfer, since it will house the disk farm.
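As a brief illustration of that remote-display arrangement (the hostname below is hypothetical, and the commands assume a SunOS/csh environment): an X terminal provides only the screen and keyboard, while the program itself runs on the server.

    # On the Sun server, in a session opened from the X terminal "xt1"
    # (hypothetical name), point X output back at that terminal's display:
    setenv DISPLAY xt1:0
    # Any X client started now uses the server's CPU and memory,
    # but draws its windows on the X terminal:
    xclock &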
The large disk space requirement is to allow the system to hold enough data online for analysis, to allow us to load an entire Exabyte tape (2.3 Gigabytes), to store copies of the operating system for its diskless clients, such as X stations, and also to store all software products, which will then be made available to the rest of the network by file-sharing software such as NFS or the Andrew File System (AFS).
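As a rough sketch of how that sharing might look under SunOS NFS (all hostnames and pathnames here are hypothetical placeholders, not part of the plan):

    # /etc/exports on the Sun server: offer the software and data areas
    # to machines on the Pole network (SunOS 4.x syntax, hypothetical names)
    /export/software   -ro
    /export/data       -access=spvax:wkstn1:wkstn2

    # On a client workstation, mount the shared software area:
    mount sunserver:/export/software /usr/local

AFS would fill the same role with its own configuration; the point is simply that one copy of each software product lives on the server and is visible everywhere on the network.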
The system should also be purchased with an Exabyte tape drive, model EXB-8200 (the 8500 has known firmware problems as of this writing), a 6250/1600 bpi 9-track tape drive, a CD-ROM drive, and an FDDI board.
We would like to consider upgrading the CPUs in the two systems and should get a quote on the cost. We have installed MultiNet (TCP/IP software for VMS) on both systems; they are now being connected to, and will serve, more remote stations because of this TCP/IP accessibility.
Also note that at the beginning of this season I ordered two additional terminal servers from Lantronix, which support both LAT and TCP/IP in the same box.
Software which should be purchased includes:
Satellite Communications
------------------------
The subject keeps coming up, and it will become a pain for everyone very soon unless we get more bandwidth. The CARA group is already talking about a requirement to move 200-600 Megabytes of data a day back to the States. Even without the CARA load, the overall needs of science continue to grow, the amount of email traffic is increasing, and we are making life difficult all around by not having a high-speed satellite link. A 9.6 kb/sec link, plus the equipment to drive it as a synchronous channel so that Pole could be on the Internet during satellite visibility, should be the minimum requirement. The level of service we could provide to our customer base here at Pole would increase dramatically merely by being able to connect to the Internet, even if only for a few hours a day. (GOES-2: 56 kb/sec for 2 hours.)
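To put rough numbers behind this, here is a back-of-the-envelope sketch (in Python, ignoring protocol overhead, assuming the link runs flat out for the whole window, and reading the GOES figure as 56 kb/sec):

    # Approximate megabytes moved over a link of a given speed
    # during a daily window, ignoring protocol overhead.
    def megabytes_per_day(bits_per_second, hours_per_day):
        return bits_per_second / 8.0 * hours_per_day * 3600 / 1e6

    print(megabytes_per_day(9600, 24))   # ~104 MB/day: 9.6 kb/sec around the clock
    print(megabytes_per_day(56000, 2))   # ~50 MB/day: 56 kb/sec for a 2-hour pass

Either figure falls far short of the 200-600 Megabytes a day CARA is projecting, which is exactly the argument for more bandwidth.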