GRU staff collaborates with Netherlands-based HPC team

By Lawrence Kearney

Technology Support staff member

While presenting at an international academic conference sponsored by the Technology Transfer Partnership (TTP) in the Netherlands recently, I had the opportunity to tour two very different radio telescope sites and the extraordinary High Performance Computing Center at the University of Groningen, and to interview the staff who manage it.

Astronomers use these sites to study the detectable radio wave energy that emanates from space itself and the celestial objects it contains. Much as a car radio picks up different stations, radio telescopes “tune into” these emissions; by measuring them, astronomers can glean information about astronomical objects, such as their distance, rotation, or size.

I realize that radio astronomy and human sciences-based research efforts are very different on the surface. However, when we consider the core computing technologies that enable progress in both, some real operational commonality begins to emerge. I hope my access to a facility with such a mature HPC implementation will benefit GRU in our existing and future HPC efforts.

My introduction to the university HPC environment began with a well-organized and informative primer presentation on radio astronomy and HPC. This not only prepared me for the telescope site visits but also helped me understand the basic design of the university HPC environment and how it evolved. The environment grew out of the initial astronomy research need and has continued to expand to support other university research efforts as well. I then boarded a coach and set off for the radio telescope sites: first the Exloo antenna fields, then the Westerbork site.

A basic concept in radio astronomy is that the bigger the dish, the better the data collected. However, there are obvious limits to how big a radio telescope dish can physically be, which is why arrays of dishes are often used. This “dish aggregation” method works well, even across multiple telescope sites. The Exloo site, however, is not a traditional radio telescope site. It is part of the Low Frequency Array (LOFAR) project, which uses a virtual radio dish implemented in software.

The site currently uses over 25,000 small, inexpensive ground-level antennas across more than 48 sites to construct its “virtual dish,” and that number continues to increase. The virtual dish is created by adjusting signal timings from the outermost to the innermost antennas to simulate the dish’s shape and size electronically. Using this technique, a telescope dish several kilometers wide can be simulated. I was very comfortable with this concept because the virtualization of computers has been well established in computer science for many years. Using virtualization to “build” a radio telescope dish is innovative and remarkable by scientific, computing, and economic standards alike.
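To make the timing idea concrete, here is a minimal delay-and-sum beamforming sketch in Python with NumPy. It is illustrative only, not LOFAR’s actual processing pipeline: it rounds delays to whole samples for simplicity, and the function name, antenna positions, and sample rate are all hypothetical.

```python
import numpy as np

C = 3.0e8  # approximate speed of light, m/s

def delay_and_sum(signals, positions, direction, sample_rate):
    """Combine antenna streams into one 'virtual dish' signal.

    signals:     (n_antennas, n_samples) array of sampled voltages
    positions:   (n_antennas, 3) antenna coordinates in meters
    direction:   unit vector pointing toward the observed source
    sample_rate: samples per second
    """
    # Arrival-time offset per antenna: antennas nearer the source see
    # the wavefront earlier, so project each position onto the look
    # direction and convert meters -> seconds -> whole samples.
    delays = -(positions @ direction) / C
    shifts = np.round(delays * sample_rate).astype(int)
    shifts -= shifts.min()              # make all shifts non-negative
    n_out = signals.shape[1] - shifts.max()
    combined = np.zeros(n_out)
    for signal, shift in zip(signals, shifts):
        combined += signal[shift:shift + n_out]  # align, then sum
    return combined / len(signals)
```

Shifting each antenna’s stream before summing makes signals from the chosen direction add coherently while signals from other directions tend to cancel, which is what lets a field of small antennas mimic one enormous dish that is pointed purely in software.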

The Westerbork radio telescope site is more traditional in that it uses 14 separate 25-meter-wide above-ground dishes that work collectively. The site was completed in 1970 and has seen upgrades at the turn of the century and again this year. I was lucky enough to be at the site when the dishes were being adjusted; watching them all move in unison was thrilling. The Westerbork site can also work cooperatively with other traditional radio telescope sites, further increasing its effective dish footprint. After visiting Westerbork, I was struck by the stark difference between the technologies used at the two sites. Westerbork was remarkable and innovative for its day, and with its upgrades it remains so even by today’s standards, but the Exloo site demonstrated how the same level of innovation can be applied to even greater effect with current technologies.

Both of these sites generate vast amounts of raw data that have to be correlated, processed, analyzed, and finally stored or archived. These tasks are certainly within the realm of supercomputing, but traditional supercomputers are very, very expensive. Academia’s contribution to fulfilling this need has been in the vein of what we’ve always done best: making do with what we have through collective effort and volunteerism. HPC became the low-cost supercomputing platform implemented and shared by higher education.

Before we go further, I’ll delve into some HPC 101. Two concepts loom largest in HPC: using lots and lots of processor cores, and being able to very precisely manage the workloads of those cores. The first is satisfied by interconnecting low-cost, often second-hand, server hardware into “clusters” that aggregate computing power. HPC clusters can contain hundreds of servers with anywhere from 10 to 64 cores each; when those cores work collectively, they can handle almost any intensive research computing workload.
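As a toy illustration of that first concept, the Python sketch below splits one large computation across several worker processes on a single machine; a real cluster does the same thing across hundreds of servers, typically with message-passing libraries such as MPI. The numbers and function names here are invented for the example.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Compute one chunk of the overall sum of squares."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 8          # pretend each worker is a core
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:         # one process per "core"
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as the serial sum, computed in parallel
```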

The second concept is satisfied by specialized software aptly called a “scheduler.” The scheduler manages work submitted to the cluster (called “jobs”), assigns computing resources to those jobs, and keeps the cluster running at maximum capacity at all times. In the context of HPC, this kind of micromanagement is the goal of any reputable scheduler. HPC is very attractive and in high demand in research environments, so selecting the right scheduler software is extremely important.
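The sketch below shows the core of the idea in Python: a deliberately simplified first-come, first-served scheduler that starts each job as soon as enough cores are free. Real schedulers (Slurm, for example) also handle priorities, fair sharing, memory limits, backfilling, and much more; all names and numbers here are hypothetical.

```python
import heapq

def schedule(jobs, total_cores):
    """jobs: list of (name, cores_needed, runtime_hours).
    Returns the start time assigned to each job."""
    free = total_cores
    running = []        # min-heap of (finish_time, cores_held)
    now = 0.0
    starts = {}
    for name, cores, runtime in jobs:   # first come, first served
        # Wait for running jobs to finish until enough cores are free.
        while free < cores:
            finish, freed = heapq.heappop(running)
            now = max(now, finish)
            free += freed
        starts[name] = now
        free -= cores
        heapq.heappush(running, (now + runtime, cores))
    return starts

# 128-core toy cluster: job "b" must wait for "a" to release its cores.
print(schedule([("a", 64, 2.0), ("b", 96, 1.0), ("c", 64, 0.5)], 128))
```

Tracking exactly when each running job will release its cores is the bookkeeping that lets a scheduler keep a real cluster running near full capacity.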

Finally, I was given a tour of the university HPC implementation at the Donald Smits Centre for Information Technology, along with access to its very talented and specialized staff. The University of Groningen houses three HPC clusters. One is dedicated primarily to the LOFAR project, but all three pool their resources using unique scheduler software. Not only does it manage jobs within each cluster, it also analyzes each job submitted by a researcher and assigns it, or parts of it, to the cluster best suited to the computing tasks. The results are collated into one final report, even if the job was spread across all three clusters. It was a remarkable and extremely valuable visit for the knowledge and insight it provided.
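I did not get a look inside that software, but the routing idea can be sketched: score each submitted job against a profile of each cluster’s strengths and send it to the best fit. The cluster names, profiles, and scoring below are entirely hypothetical, a toy version of the behavior described above.

```python
# Hypothetical cluster profiles: how strong each cluster is at
# GPU work, I/O-heavy work, and general CPU work (0 to 1).
CLUSTERS = {
    "lofar":   {"gpu": 0.2, "io": 0.9, "cpu": 0.6},
    "general": {"gpu": 0.5, "io": 0.5, "cpu": 0.9},
    "gpu":     {"gpu": 0.9, "io": 0.4, "cpu": 0.5},
}

def route(job_profile):
    """Pick the cluster whose strengths best match the job's needs.

    job_profile: relative weights, e.g. {"gpu": 0.8, "io": 0.1, "cpu": 0.1}
    """
    def fit(name):
        caps = CLUSTERS[name]
        return sum(weight * caps[key] for key, weight in job_profile.items())
    return max(CLUSTERS, key=fit)

print(route({"gpu": 0.8, "io": 0.1, "cpu": 0.1}))  # -> "gpu"
```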

Academic staff across all disciplines and institutions of higher education from all over the globe attend TTP conferences. A primary goal of the TTP is to foster collaboration and technology transfer among institutions of higher education and K-12. Whenever possible, the conferences are expanded to include access to remarkable university programs, research efforts, and staff. The conferences are held semi-annually in the United States and abroad. GRU maintains membership on the Advisory Board for Higher Education in the Americas (through me) and is a respected presence in conference attendance and contribution.

Lawrence Kearney is a Technology Support staff member at GRU. He regularly presents and facilitates at international academic technology conferences and is learning to speak Brazilian Portuguese.

More information:

LOFAR project

Westerbork site