TCP is the dominant transport protocol used in the Internet. The most useful service semantics TCP provides are "reliability" and flow control. To avoid congestion collapse in the Internet, TCP also incorporates mechanisms for "congestion control". While this mix of mechanisms seems to work well for traditional applications such as the web, which is characterized by short TCP transfers, it is not clear how well these mechanisms work for newer applications (such as large file transfers) and high-speed networks. The goal of our research is to understand how well traditional TCP mechanisms scale to these newer requirements, and to explore the design of alternative mechanisms where traditional ones fall short.
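To make the congestion-control behavior mentioned above concrete, the sketch below simulates TCP's classic additive-increase/multiplicative-decrease (AIMD) window adjustment. It is a deliberately simplified illustration, not a model of any particular TCP implementation: the round structure, constants, and the `aimd` function are our own simplifications.

```python
def aimd(rounds, loss_rounds, cwnd=1.0, increase=1.0, decrease=0.5):
    """Return the congestion window (in segments) after each round.

    The window grows by `increase` segments per round (additive increase)
    and is cut by the factor `decrease` in rounds where a loss is signaled
    (multiplicative decrease). This is a toy model: real TCP also has slow
    start, timeouts, and per-ACK (not per-round) updates.
    """
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1.0, cwnd * decrease)  # multiplicative decrease on loss
        else:
            cwnd += increase                  # additive increase otherwise
        history.append(cwnd)
    return history

# Losses at rounds 5 and 8 produce the familiar AIMD "sawtooth" pattern.
window = aimd(10, loss_rounds={5, 8})
```

Even this toy version shows why AIMD throughput degrades on high bandwidth-delay-product paths: after each loss, the window takes many round trips of additive increase to recover, which motivates the scalability questions raised above.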
Over the past few years, we have been involved in several projects related to the design of scalable monitoring infrastructures, accurate probing techniques, and passive analysis of traces of real Internet sessions. Details about these projects can be found here.
Most of these projects draw on several skills, as illustrated below: analytical modeling, mechanism design, passive analysis, prototype implementation, and experimentation.
With this research theme in mind, the research problems listed on this site are of immediate interest and build on our recent work in this area. Some of these projects are smaller in scope than others and would be pursued in collaboration with existing PhD students. All of them require strong experimental and analytical skills.