18 Feb Interview with Vint Cerf, VP at Google
Vinton Gray Cerf is an American computer scientist who is considered one of the fathers of the Internet. Along with Bob Kahn, he co-invented TCP/IP, the protocol suite that makes everything on the internet work. At present, he is Vice President and Chief Internet Evangelist at Google. His many distinguished honours include the ACM Turing Award (considered the Nobel Prize of computing) and the Presidential Medal of Freedom. Our managing editor, Manjil Saikia, conducted an audio interview with Vint Cerf at the Heidelberg Laureate Forum (HLF), held in September 2016 in Heidelberg, Germany. Below is a transcribed version of the interview. The original audio file can be played below.
Manjil Saikia (MS): What motivated you to develop TCP/IP with your colleague?
Vint Cerf (VC): This was a project that the American Defense Department sponsored, and it was based on earlier results they had gotten from the testing of packet switching within a network called ARPANET, which stood for Advanced Research Projects Agency Network. This was an experiment to connect different types of computers, different brands of computers, on a single homogeneous network, and to overcome the diversity of the machines by standardizing the protocols between the computers. I worked on that project with my colleagues while I was at UCLA, and then, after we realized that we could do this kind of thing, the Defense Department asked the question: can we use computers in command and control? When you begin to answer that question, you realize that some of the computers will have to be in mobile vehicles, some of them in ships at sea, some in airplanes, and others in fixed installations. But in order to network the computers, you might need mobile radio communications; you need satellite communications, especially for ships at sea; and you might need dedicated facilities to connect the fixed computers together. So, they had different kinds of packet-switched networks in different modalities, with different speeds, different error rates, and different packet sizes. And somehow you had to make all of that look uniform from the standpoint of the computers, no matter which network they were on. And we had to make the system expandable to an arbitrarily large number of networks. That was the internet part of it. And so Robert Kahn, who had worked on the ARPANET and subsequently went to ARPA, and I got together when I went from UCLA to Stanford University, and the two of us did this design work in 1973. Over a six-month period we came up with the basic architecture of TCP and the internet, and we published a paper the subsequent year, in May 1974.
So the motivation was essentially a funded research project from the Defense Department with an application in mind, specifically command and control. And there is one other implication that's important: if you are going to use computers in command and control, not only will you be moving data around, but you may need voice communication and video communication. So whatever this packet-switched system did, it would have to carry voice and video, which are also data. That put quite a big challenge on one plate: to get one technology that would harness all of the diversity of the networks and computers and handle these various kinds of applications, including real-time communication.
MS: So, when you did this, did you have in mind that this would become such a global phenomenon that everyone would be using it in 20-30 years’ time?
VC: Well, I wouldn’t say that we had quite that vision at the beginning. We were serving the Defense Department. We knew the Defense Department had to work on a global scale: they had to be able to go anywhere, set up their communications, and all their systems had to work no matter where they went. So the design was deliberately global, and we also did the design in such a way that none of the networks was bound to any particular country; this was nothing like the telephone system with country codes and things like that. We eliminated all of that. It wasn’t really until 1988 that I recognized the possibility that this could be made available to the general public. Up until that time the networks had been built and used for research, for academics, for the military, but it was always government sponsored. So it was only in 1988 or 1989 that we saw the possibility of providing this capability to the general public, and it was in 1989 that three commercial internet services came up in the US. One was called UUNET, one was called PSINET, and another was called CERFNET, in the San Diego area. And really that was the beginning of the commercialization of the system.
Not long after that, the World Wide Web showed up, thanks to Sir Tim Berners-Lee at CERN. His first work wasn’t really very visible. But a few people at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign built a browser called Mosaic, which had a graphical user interface. The consequence of that was to turn the internet into a very colourful, magazine-like format that caught everyone’s attention. It created a very easy path for sharing information with people, whether it was imagery, or eventually video and sound, formatted text and so on. That happened around 1993, followed by the creation of Netscape Communications, which went public in 1995; its stock went through the roof and the dot-com boom was launched. So, for me, recognition that this could be a global phenomenon came around 1988, and from my point of view that was about 15 years into the project.
MS: Do you like the way the internet has shaped up in these years?
VC: Well! Mixed feelings. First of all, I wish more people had access to it. We think maybe half of the world is online right now, and that means at least three and a half billion more people to bring online. I think the benefits of the internet and the World Wide Web and other applications have been pretty positive on the whole. We know that there are bad things that happen on the net: accounts get hacked, and malware goes around the net and causes a lot of trouble. But this is a classic example of a technology that can be used for either positive or negative ends; that sort of thing goes along with infrastructure, which tends to have the same characteristics. At the same time, there is still a lot of work to be done to make this a safer environment for people to contribute to and to work in. There is also a lot of infrastructure to be built. On the whole, though, I think the positive benefits have been striking. Economically speaking, a great deal of online commerce has happened; the numbers are in the trillions of dollars a year now. The ability to get access to information when you need it, especially now that we have smartphones, is quite remarkable. Wherever a smartphone is, the internet can be, because it is accessible through the smartphone. At the same time, the smartphone is made more useful because it has access to the content of the net. So, on the whole, I think we still have quite a lot of new applications to explore, and we have work to do to make it a safer, more secure and more private environment.
MS: You started out with a degree in mathematics.
VC: That’s right. I was an aspiring mathematician at Stanford University in the 1960s.
MS: And what inspired you to go into computer science later?
VC: Well, I had already been exposed to computing by the time I got to Stanford. I had started writing software for a now ancient machine called the Bendix G-15 at UCLA in 1960. So I had already gotten very interested in computing, and fortunately Stanford, by the time I got there in 1961, already had computers on the campus. They had brought in Burroughs B5000 or B5500 machines. They had a PDP-1 time-sharing system which John McCarthy brought from MIT. I took every computer science class I could take at Stanford, and when I finished my mathematics degree, I went to work for IBM and spent two years in Los Angeles running a time-shared data centre. It ran an application called QUIKTRAN, a remote FORTRAN programming service that gave you access to a time-shared machine in order to get computational results. And it was the two years at IBM that persuaded me that I needed to go back to school to learn a lot more about the theory of computing, as opposed to the practice of computing. So I went to UCLA, because that was convenient to where I was working, to take my PhD. And while I was at UCLA, I got infected with the ARPANET project. So I went from mathematics to computing to networking, all within the course of about five years, and ended up on the faculty at Stanford University doing the internet project with Robert Kahn and others.
MS: Was this ARPANET project a part of your PhD thesis?
VC: No. Actually, the ARPANET project was completely independent. My PhD dissertation was done under Gerald Estrin, and it was all about the modeling of parallel programs: trying to take a serial program and convert it to run on a parallel processor, and figuring out whether and how you could do that without the programs running in parallel interfering with each other, because they were trying to use the same memory locations. We had to figure out how to avoid that. So this was all about analyzing program behaviour to see under what conditions you could run things in parallel.
MS: What is your current work at Google focused on?
VC: I am the Vice President and Chief Internet Evangelist at Google. I am officially in the research department. I spend a lot of my time travelling. I spend most of my time on internet policy issues, because there are a lot of them, whether financial, legal, or social. The internet touches so many different things that my concerns about internet governance, for example, and about protecting people from harm, enhancing access to the web, or increasing its utility, are all topics that are very much on my plate. But I have also been spending a fair amount of time looking at the Internet of Things, which will be a great concern for liability, safety and interoperability.
I am also very interested in and concerned about a problem I will call the Digital Dark Age. It has to do with the fact that digital material usually needs software in order to be rendered or interacted with, like a spreadsheet, and unless you can run the program, that is, unless you can run the software, these digital objects may not be useful anymore 100 years from now. Something that you made now may not work anymore because you can’t run the program. So I am concerned about preserving digital content by making sure that we can still run the software that created it. And this is on the presumption that you can’t transform and translate a piece of digital information into a form that a new piece of software can manipulate. You may have to run the old software, the old operating system, and maybe even emulate the hardware. This Digital Dark Age problem is going to become very apparent to all the people who are taking pictures with their mobiles when, ten or fifteen or twenty years from now, the formats are no longer interpretable because nobody bothered to write backward compatibility into the photo software, for example. So those are the things that I am mostly concerned about.
MS: About the thing you mentioned, the Digital Dark Age: I face this when I make diagrams in a piece of software called IPE, and then a newer version comes along and they do not work anymore.
VC: Yes, we have all had experiences like that. I have old PowerPoint files, for example, that are no longer renderable with the new PowerPoint software. And I am not necessarily blaming Microsoft; it’s very hard to maintain backward compatibility. But it does mean that the effort that we put into creating content may be lost if we don’t find a way to preserve its meaning.
MS: Is there some specific place where you are doing this preservation work?
VC: My offices are located in the Washington DC area, but the people I am working on this with are scattered all over. There are a number of places that are very interested in this problem. There is a group called the International Internet Preservation Consortium which meets periodically; its last meeting was actually in Iceland. There are several hundred people who are worried about this. Brewster Kahle is running the Internet Archive in San Francisco, and he is very concerned about this. A man named Mahadev Satyanarayanan at Carnegie Mellon has built emulators of old hardware so that you can run old operating systems and old applications very successfully. Thus, he has demonstrated at least one example of a technical response to this Digital Dark Age problem. Librarians are also quite interested in this. So there are a lot of people with an interest.
I would not say that we have reached the point where we have a lot of common understanding about what to do; and the problem is more complex than just technical. There is a big technical component, but there is a legal component too. Somebody may own the application software, the operating system, or even the instruction set, so there may be questions about what permissions one needs in order to run an emulator that runs the operating system, and whether the same could be made available to third parties who did not pay for it. There is also a financial question: how do you create an institution that will last for hundreds of years to preserve digital content? There are not too many organizations that last for hundreds of years. The Catholic Church would be an example of an institution that has lasted for a couple of thousand years, and another would be breweries: there are some places that make beer and have been making beer for the last 500 years. I have been unable to figure out the connection between beer and preserving digital content, but a business model that lasts that long might have some lessons to teach us.
MS: Moving to a separate topic, we are aware of the Interplanetary Internet project. Can you tell us more about it?
VC: That is another external project, not a Google one. This one I have done with NASA, starting specifically at the Jet Propulsion Laboratory in Pasadena. I have been a visiting scientist there since 1998, though not physically there very often. There is a team with which I started this project to design a new set of protocols that would function successfully across the solar system, taking into account the disruptions from planetary rotation, the variable delay that is a consequence of orbital mechanics, and the fact that the speed of light is constant and, in relative terms, slow. Those protocols have been tested and are in operation now between the Earth and Mars and on the International Space Station (ISS). So we have evidence that we know how to do this, and it has actually been helpful, because most of the data coming back from Mars uses these new protocols to store, forward and relay it back to the Earth: from the rovers on the surface to the orbiters, and then back to the Earth. In some cases, we have been doing tests from the ISS, controlling robots on the ground with servers on the Earth’s surface, emulating what we might do if we had astronauts in orbit around Mars steering rovers around from their orbiting space station.
MS: And how soon is it going to be implemented on a wide scale?
VC: This is not the sort of thing where you build everything and hope that somebody will use it straightaway. The secret behind our success, if we are successful, is that each new mission to explore parts of the solar system will require a spacecraft of some kind. The robotic missions sometimes take years; it takes maybe 9-10 years to get to Pluto, for example. But those spacecraft, once they have completed their scientific studies, can be repurposed as relays in a backbone network. So what we think will happen is that we will launch these new missions, and as they finish their primary scientific missions, they will be repurposed as nodes in a backbone. All of these have been designed so that we can upload new software, and that is smart, because sometimes you need new functionality and sometimes there are bugs that have to be fixed, and that makes it possible for us to consider actually building up the interplanetary backbone over time.
MS: So, this is a very long term process?
VC: Yes, this is a very long-term process. I won’t see the end of it, I am sure. But that’s okay; I don’t mind not being there at the end.
MS: Coming to something related to India, we are aware that you are on the governing council of IIIT Hyderabad. Your views?
VC: Yes, that’s correct. P. J. Narayanan is the Director of the institute now. He is a good friend, because he was the President of the ACM India Council, and his presence was extremely useful to the ACM in making our work more visible. I am quite excited about his university, especially because it is a big experiment. First of all, it is not a national university; it is one that was established in the state. Second, it does not have departments in the classical sense; it has colleges that are focused on particular programs. And I think that is the right way to go in the 21st century, where multidisciplinary attention to a particular problem is needed. Its success is an important indicator of what is possible in the university structure. That’s why I am so interested in how well it works out.
MS: Do you visit the institute?
VC: Not very often. Most of the time, we on the governing board have telephone conference calls and consult that way. But I look forward to being on the campus when I can.
MS: What differences do you see between the Indian and the US education systems?
VC: I have visited a number of campuses in India, and I have two things to say in this regard. First of all, the quality of the students I meet coming to the US is very, very high. You can see the remarkable success of Indian emigrants, especially in Silicon Valley; look at Google and Microsoft, both headed by very smart people from India. But here is what I have noticed: the facilities that are available in India are still relatively weak compared to their counterparts in other parts of the world. Infrastructure in India is still a big challenge, whether it is getting electrical power on a consistent and reliable basis or getting roads built. I think there are a number of reasons for that, but I don’t know why getting the internet deployed is such a big issue. Prime Minister Modi has very ambitious plans to get a fibre network out to 2,50,000 villages and to extend that to another 3,50,000. But that is going to take a concerted, cooperative effort, and it is going to be costly. So one thing is that infrastructure and facilities really need to be improved in India, just for the benefit of the students.
The second thing is that watching the students as they move into a place where more resources are available tells you that the best students in India are as good as those anywhere in the world. But unless they get into a place where resources are available, it is very difficult for them to make significant progress. One of my Indian students had a very successful career at Apple, at Xerox PARC and at a venture capital company because he had resources available. One thing India needs is to put more resources in the hands of the students and the professors, so that the professors can take the students to the latest techniques. Now, with the spread of the internet, maybe that will help, because you can remotely get access to resources to which you do not have physical access. This was the theory behind the ARPANET as well, which connected universities; so maybe that will be at least one little step in the direction of improving that metric.
MS: Do you think that tech companies like Google, Microsoft or Facebook will have a big role to play in such kind of things? For instance, helping some countries make their network more robust?
VC: Well, we care a lot about research and funding research at Google, and the same can certainly be said about Microsoft; I am less clear about Facebook’s policy. We have been growing our presence, Google’s presence, for instance in Hyderabad and Mysore, and Microsoft is doing the same kind of thing. We have benefited from good-quality students coming from India, but I still think that their experience would be much improved if they had access to more resources while they were at school.
MS: Thank you very much for your time.
VC: I am glad we had a chance to chat. Sorry, we didn’t have more time. We actually covered quite a lot already.
(Angshuman Deka and Salik Miskat Borbora assisted with the speech-to-text part.)
Manjil Saikia is the Managing Editor of the English Section, Gonit Sora, and a Research Fellow at the Faculty of Mathematics, University of Vienna.